0:00

Okay, hi folks. So, we're still talking about learning, and we're going to talk more about the DeGroot model now and try to understand some of its convergence properties. One thing just to start with: let me reiterate a little on the structure of the model here. So, we have b_i(t) equal to the sum over j of T_ij times b_j(t-1), where T_ij is how much weight person i puts on person j's belief from the previous period.

So it's a very simple model. The nice thing about this is that if we want to think about the belief structure, we can stack the beliefs b_1(t), b_2(t), and so forth down to b_n(t) into a vector. That vector is just equal to the T matrix times the vector of beliefs at t minus 1. Right. So, when we're thinking about what this looks like, we can represent b(t), as a vector, as just equal to T times b(t-1).

Okay. So the fact that the update is just a matrix times a vector means that it's very easy to work with, in terms of updating and some of the mathematics behind it. In particular, it means that we can represent the belief at some time t as b(t) equal to T raised to the t-th power, times b(0). Right? Because b(1) is equal to T times b(0), and to get b(2) you apply T again: b(2) = T b(1) = T squared times b(0). So, we're just raising T to the t-th power.
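To make the matrix form concrete, here's a minimal numpy sketch; the trust matrix T and the initial beliefs are illustrative choices, not anything from the lecture slides:

```python
import numpy as np

# Illustrative 3-person trust matrix: row i holds the weights person i
# places on each person's belief from the previous period (rows sum to 1).
T = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b0 = np.array([1.0, 0.0, 0.0])   # initial beliefs b(0)

b1 = T @ b0                      # one step: b(1) = T b(0)

# t steps at once: b(t) = T^t b(0)
t = 5
bt = np.linalg.matrix_power(T, t) @ b0
```

Iterating the one-step update t times and applying the t-th matrix power give the same answer, which is exactly the point of writing the model in matrix form.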

And the reason this model's going to be so nice to work with is that we know a lot about matrices raised to powers, in particular matrices whose rows are non-negative and sum to one. Those are nicely behaved matrices, and a lot of theory has been developed in different areas of mathematics, particularly the study of Markov chains, which have exactly these kinds of properties. So the beliefs take a very simple form,

and what we can do now is begin to ask questions: when is there convergence, when is there consensus, and who has influence? Whether there is convergence and consensus is just going to depend on properties of this matrix. Who has influence will also depend on what this matrix ends up looking like when we raise it to higher and higher powers. What does it end up looking like? Who ends up with a lot of weight in that system? Okay? So, understanding this is really going to depend on these kinds of

properties. Okay, so let's talk a little bit about convergence. Let's start with a very simple example: person 3 here just listens to person 2, 2 just listens to 1, and 1 puts equal weight on 2 and 3. Okay, so this is a world where nobody pays attention to themselves; they're all listening to somebody else, and who they're listening to depends on who they are.

3:51

This is a world that will converge. And let's start with, say, the example of belief 1 for person 1 and 0 for persons 2 and 3. So that's the initial belief. And what's going to happen? Well, person 1 starts with the belief of 1, and person 2 is going to take on that belief when they update. So we can go through what the next belief is going to be: person 2 is now putting weight 1 on person 1, so they're going to get a 1. Person 1 is averaging two zeros, so they're going to get a zero. Person 3 is just paying attention to 2, so they're also going to get a zero. So the belief vector after one period goes to (0, 1, 0).

You can keep iterating this, and so forth; this will eventually converge to two-fifths, two-fifths, two-fifths. So if you look at what this process converges to, it's going to be two-fifths for each agent, and it converges fairly nicely.
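You can check this numerically; a small sketch, assuming numpy is available, with the weights just described:

```python
import numpy as np

# Row i holds the weights person i places on others' previous-period beliefs.
T = np.array([[0.0, 0.5, 0.5],   # person 1 averages 2 and 3
              [1.0, 0.0, 0.0],   # person 2 listens to 1
              [0.0, 1.0, 0.0]])  # person 3 listens to 2
b = np.array([1.0, 0.0, 0.0])    # person 1 starts at 1, the others at 0
for _ in range(200):             # iterate b(t) = T b(t-1)
    b = T @ b
print(b)                         # all three entries approach 2/5
```

After a couple hundred iterations the three beliefs are numerically indistinguishable from 0.4.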

Okay. Let's look at another example.

And all I'm going to do now is make a switch. Before, 3 was listening to 2. We're going to move that and instead have 3 listen to 1. Okay? So, this entry was a 1 before, and we're going to move the weight: instead of 3 listening to 2, 3's now going to listen to 1. Okay?

And let's look at what happens in this particular setting.

So now we're in a setting, and let's start with the same beliefs we did before. All we did was change 3's weight from listening to 2 to listening to 1. And what happens now?

Well, we start with these beliefs, and after one period, 1 is going to believe 0. But both 3 and 2 are listening to 1, so they're both going to switch to 1. Okay? So now we've got a situation where both 2 and 3 believe 1, and 1 believes 0. And 1 is listening to 2 and 3, and 2 and 3 are listening to 1, so they're going to switch again, right?

5:56

And then they're going to switch again, and switch again, and so forth. So what is happening here is that they just keep exchanging beliefs. An even easier example of something like this would be if person 1 listened only to person 2 and person 2 listened only to person 1: they would just keep swapping beliefs back and forth. And that's effectively what's going on here.
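Here's a numpy sketch of that back-and-forth, using the modified weights where both 2 and 3 now copy person 1:

```python
import numpy as np

T = np.array([[0.0, 0.5, 0.5],   # person 1 averages 2 and 3
              [1.0, 0.0, 0.0],   # person 2 listens to 1
              [1.0, 0.0, 0.0]])  # person 3 now listens to 1 as well
b = np.array([1.0, 0.0, 0.0])
for t in range(1, 5):
    b = T @ b
    print(t, b)   # alternates between (0, 1, 1) and (1, 0, 0) forever
```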

So, we get blinking; we get no convergence in this setting, okay? The beliefs just keep changing over time; they never converge. And, obviously, this is fairly naive in the sense that the agents don't realize they keep exchanging beliefs back and forth, and they keep doing that infinitely often. But this is a system that's not

convergent. Okay. Now, what has to be true for this to happen? It's actually based on the cycles in the graph, and here all the cycles are of length two. And we've set up the beliefs in such a way that they keep blinking back and forth, so it's possible for the beliefs to keep switching back and forth around these cycles. So there's a cyclicity in the graph which it's possible to reproduce in the beliefs.

Okay? Now, it could be that for other initial beliefs you would have converged. If we started everybody with a belief of 1, they would've converged to a belief of 1. So it depends on which beliefs we choose, but whenever all the cycles in a given graph are even, we can find situations where we're going to get non-convergence. So, more generally, let's actually look at

the conditions which are going to define convergence.

So let's say that we'll, we'll starting with some particular network of weights

that people put on different individuals. We'll say that that society converges.

T converges, if the limit of this process raised to the t.

So the limiting belief exists for all initial beliefs.

In, in so, so no matter what initial beliefs we started with, we would end up

with convergence of this overall belief. Okay?

So, that'll be our definition. And in terms of characterizing this, we'll say that T is aperiodic if the greatest common divisor of the lengths of all its simple cycles is 1, okay? So, for instance, in the two examples we looked at before: here we can find a cycle of length three, and we have a cycle of length two. The greatest common divisor of three and two: they don't divide into each other, and the only thing that divides both of them is one. The greatest common divisor is one. This thing is aperiodic. Right.

So this one is aperiodic. What do those cycles look like over here?

Well, there's a cycle of length two. Here's a cycle of length two.

8:56

Here's a cycle of length four. All the cycles are even; the cycle lengths are 2, 4, 6, 8, etc. The greatest common divisor is 2. So, this one is periodic; it's not aperiodic.
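The gcd test is easy to sketch in code; the cycle lengths below are read off the two example graphs by hand (an assumption for illustration, not computed from the matrix):

```python
from functools import reduce
from math import gcd

def is_aperiodic(simple_cycle_lengths):
    # T is aperiodic iff the gcd of its simple cycle lengths is 1.
    return reduce(gcd, simple_cycle_lengths) == 1

print(is_aperiodic([3, 2]))   # first example: cycles of length 3 and 2
print(is_aperiodic([2, 2]))   # second example: all cycles have length 2
```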

Okay? So when we're looking at the structure of these matrices, what turns out to be true is that this aperiodicity is what determines whether or not you get convergence of the process. In particular, there's a theorem, which you can get out of a variety of sources, but which basically can be derived from work in Markov chain theory and, more generally, in linear algebra.

Suppose that we have T strongly connected. And let me say a little about what strongly connected means. Strongly connected says that from every given individual there exists a directed path to every other: there exists a directed path from i to j for all i, j. Okay. So basically, we're not in a situation where some people could never end up getting information from other individuals; everybody can eventually hear from everybody else. Okay?

If you don't have strong connectedness, the characterization here gets a little more complicated. If you're interested in that, I have a paper with Ben Golub in 2010 which gives the complete characterization of convergence for non-strongly connected networks. We'll just deal with the strongly connected ones, and that covers most of the intuition. Okay.

So what does the theorem say? The theorem says that you get convergence if and only if you have aperiodicity: aperiodicity is necessary and sufficient for convergence. There are two parts. The first part is that aperiodicity gives you convergence. And in the second part, what we learn is that if things are convergent, then the limit of this matrix takes a special form: if we look at what the limit of T raised to the t-th power looks like as t goes to infinity, every row converges to the same vector, with entries s1 to sn. So, in the limit, everybody ends up indirectly putting the same weights on other people's initial beliefs, where s is the unique left-hand eigenvector of T that has eigenvalue 1.

Okay, so that's a mouthful, but it says that, in fact, this is the same kind of eigenvector that we were talking about when we talked about eigenvector centrality. So this is an eigenvector that has eigenvalue 1, and in this case it's going to be the unique one if we've got strong connectedness and aperiodicity.

This thing's going to be convergent, and it's going to have nicely defined weights here, s1 through sn; in fact, they will all be positive. So this is a very powerful set of results. It tells us that aperiodicity gives us convergence; moreover, convergence occurs to a very specific limit, and that limit's given by the left-hand unit eigenvector of the matrix.

And that's where the two-fifths, two-fifths, one-fifth came from, and the other numbers that we found in those examples before.
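For the first example, here's a numpy sketch of recovering s, the left-hand eigenvector with eigenvalue 1, normalized so its entries sum to one, and of checking that the rows of T raised to a high power match it:

```python
import numpy as np

T = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Left eigenvectors of T are right eigenvectors of T transpose.
vals, vecs = np.linalg.eig(T.T)
s = np.real(vecs[:, np.argmin(np.abs(vals - 1))])  # eigenvector for eigenvalue 1
s = s / s.sum()                                    # normalize weights to sum to 1
print(s)                                           # approximately (2/5, 2/5, 1/5)

# Every row of T^t approaches s as t grows.
print(np.linalg.matrix_power(T, 200))
```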

Okay. So, what we're going to look at next is how you prove this. It involves a bit of linear algebra and some theorems from that, so you can skip it if you like. If you want to see the details, I'll explain how you can derive that kind of

theorem. And then afterwards we'll begin to put this into practice. So what we've got is convergence from

aperiodicity, plus we know what the limit looks like, so we'll then talk in a little more detail about what this s vector actually means and where it comes from. And we'll look at some examples and try to understand how we might use this model a bit.