So let's briefly summarize what we've learned in this lecture. We talked a lot about the six degrees of separation, especially toward the beginning. We first looked at Milgram's experiment, which was the canonical example in which six degrees of separation were first observed in a realistic social network. Then we saw other examples in society: we looked, for instance, at the six degrees of Kevin Bacon, at the Erdős number, and at Facebook, whose degree of separation was 4.74, at least as of 2011, and so forth. So we saw six degrees showing up in a lot of different social scenarios, and how it became a big question as to how that was possible.

Then we looked at what properties such a small world would have to have in order to fit our intuition of what a social network looks like. It has to satisfy these properties in addition to having the six-degrees-of-separation property. We saw that a small world has to have a large clustering coefficient, quantified in terms of triadic closures of social relationships: the number of closed triples per connected triple. A large clustering coefficient indicates that your friends tend to be connected to one another. The network also has to exhibit homophily, which basically says that people tend to pair up when they have similar interests, and which again implies that your friends will tend to be friends with your other friends, so there's a lot of transitivity going on. But the network must also have a small average shortest distance: you have to be able to get far across the network in a relatively small number of hops, with path lengths more like those that occur randomly across the graph. And so for the graph models, we looked at a few different ways we might be able to get this small-world property.
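To make the clustering-coefficient definition concrete, here is a minimal sketch in pure Python; the function name, the toy graph, and the adjacency representation are all illustrative choices, not from the lecture:

```python
from itertools import combinations

def global_clustering(adj):
    """Global clustering coefficient: closed triples / connected triples.

    adj maps each node to the set of its neighbors (undirected graph).
    A connected triple is a node together with two of its neighbors;
    the triple is closed (a triadic closure) when those two neighbors
    are also linked to each other.
    """
    triples = closed = 0
    for node, nbrs in adj.items():
        for u, v in combinations(nbrs, 2):
            triples += 1
            if v in adj[u]:
                closed += 1
    return closed / triples if triples else 0.0

# One triangle (a, b, c) plus a pendant node d hanging off c.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
print(global_clustering(graph))  # → 0.6 (3 closed triples out of 5)
```

Each triangle is counted once per vertex, so this matches the usual formula of three times the number of triangles divided by the number of connected triples.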
We looked at the random versus the regular graph, neither of which could provide both of these things. Right? Because the random graph, we saw, had a small clustering coefficient but also a small average shortest distance, whereas the regular graph had a large clustering coefficient but also a large average shortest distance. They stood in stark contrast to each other: in the random graph we would establish links literally at random across the network, whereas the regular graph had a fixed structure, entirely determined by two parameters. It sat on a ring, and each node connected to its closest neighbors, really exhibiting that homophily property. It's very hard to draw one of these things, which is why I'm not doing a very good job of it, but you get the idea; you can look back at the examples drawn in the lecture.

We then looked at the Watts-Strogatz model, which got us the structural small world: a large clustering coefficient together with a small average shortest distance. In this model we saw how we would start with the ring structure, the regular-graph structure, and then randomly add some long-range links between nodes. That captures both things: a large clustering coefficient among your immediate friends, plus a few long-range links that can dramatically reduce the average shortest distance between pairs of nodes without affecting the clustering coefficient as much. The reason, again, was that the average shortest distance is an extremal quantity, whereas the clustering coefficient, in the case of the regular graph, is more of an average over all nodes. And then we looked at the Watts-Dodds-Newman model.
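The effect of those few long-range links on the average shortest distance can be sketched numerically. This is a simplified Watts-Strogatz-style construction (adding shortcuts rather than rewiring), written in plain Python with illustrative names and parameters:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Regular ring graph: each node links to its k nearest neighbors
    on each side -- the large clustering, large distance regime."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            adj[i].add((i + step) % n)
            adj[(i + step) % n].add(i)
    return adj

def add_shortcuts(adj, m, rng):
    """Add m random long-range links (a simplified stand-in for the
    Watts-Strogatz rewiring step)."""
    nodes = list(adj)
    added = 0
    while added < m:
        u, v = rng.sample(nodes, 2)
        if v not in adj[u]:
            adj[u].add(v)
            adj[v].add(u)
            added += 1

def avg_shortest_distance(adj):
    """Average shortest-path length over all pairs, by BFS from each node."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(0)
graph = ring_lattice(200, 2)
before = avg_shortest_distance(graph)  # ~25 hops on the pure ring
add_shortcuts(graph, 20, rng)
after = avg_shortest_distance(graph)   # drops sharply after 20 shortcuts
```

Only 20 extra links on top of 400 existing ones, yet the average distance collapses, while the clustering coefficient, being an average over all nodes, is barely perturbed.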
The Watts-Strogatz model gives us a structural small world, but how is it that, through greedy social search, we could actually find these short paths, getting from one node to another using only local information? In the Watts-Strogatz graph it wasn't clear how we'd be able to do that, because a node wouldn't know when to take those long-range links. So the Watts-Dodds-Newman model took us that extra mile of having both structural and algorithmic small worlds. We didn't look at a lot of the mathematical details behind the WDN model. But the idea was that the search paths you would find through such a greedy social search mechanism would be close enough to the optimal paths, the lengths you would get by following the shortest paths, that we could conclude those short paths were indeed discoverable, in addition to just existing.

The major concepts we saw here: the first one was, again, graphs and networks. A lot of this was just being able to see different graph structures, look at different types of graphs, and take these social scenarios and represent them as graphs with nodes and links. And also the concept of "bigger and bigger," right? Even as the network grows huge in size, with social networks of billions and billions of people, we still have a small world. That's indeed a very important concept today; it's exhibited in a lot of different situations and scenarios, and it's also a concept that's widely researched today across many different fields.
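Here is one way to sketch that greedy social search on a small ring with a single long-range link, again in plain Python with made-up names. Each node forwards the message to whichever of its own neighbors is closest to the target in ring ("geographic") distance, which is exactly the local information greedy search is allowed to use:

```python
def ring_distance(u, v, n):
    """Lattice ('geographic') distance between u and v on a ring of n nodes."""
    d = abs(u - v) % n
    return min(d, n - d)

def greedy_search(adj, n, src, dst):
    """Forward greedily to the neighbor closest to the target, using only
    local information; return the path, or None if stuck in a dead end."""
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        nxt = min(adj[here], key=lambda v: ring_distance(v, dst, n))
        if ring_distance(nxt, dst, n) >= ring_distance(here, dst, n):
            return None  # no neighbor improves our position: stuck
        path.append(nxt)
    return path

# Ring of 20 nodes, 2 neighbors per side, plus one shortcut 0 <-> 10.
n, k = 20, 2
adj = {i: set() for i in range(n)}
for i in range(n):
    for step in range(1, k + 1):
        adj[i].add((i + step) % n)
        adj[(i + step) % n].add(i)
adj[0].add(10)
adj[10].add(0)

print(greedy_search(adj, n, 0, 11))  # → [0, 10, 11]: the shortcut is taken
```

Note that the search only ever inspects the current node's own neighbor list, never the whole graph, yet it still discovers the short route through the long-range link.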