0:00

Okay, so to finish up some of our discussion of the DeGroot model.

Let's ask a question now of applying this and understanding when it is that a society of people, who are updating by this sort of weighted-average method, are actually going to come to a consensus which is accurate, right? So when is it that their beliefs will actually be reasonably accurate beliefs? So this is information aggregation in this setting: is the consensus that they're going to come to accurate? So we have to put some structure on this.

And actually, the reason I originally became interested in the DeGroot model was out of conversations with a former student of mine, Ben Golub. And we started asking the question of when it was that people's beliefs would actually converge to the right sort of thing, even if they were updating in a fairly simple way.

And so how does this depend on network structure?

How does it depend on influence? How does it relate to speed of convergence? There's a whole series of questions that we can ask here, and we'll just take a quick look at how this plays out in the context of the DeGroot model.

And let's suppose that there is a true state out there, which we'll call mu. So in nature, for example, there's really going to be some probability that there's going to be global warming. That's something that's true, it's out there, and each person's belief at time 0 is different from the true value. So there's some true number out there, and everybody has some error. And what we're going to make sure of is that the errors of different individuals have 0 mean and finite variance. Okay, and if you want to keep all these things in [0, 1], you can do that if you like; they don't have to be in [0, 1]. You can keep them in [0, 1] by making sure that the variance is bounded, so that beliefs can't differ from mu by enough to go outside [0, 1], but that's actually not necessary for this analysis. Okay, so we've got the beliefs, everybody has some error, and now we run the DeGroot model. And what we'd like to have is that, if people keep talking to each other, talking to different individuals, the situation eventually converges to the true mu. So that by talking to people, we would actually learn what the true mu was.

And here we can allow these epsilon i's to have different variances across individuals, but let's make sure that they're independent conditional on mu. So one person might have a high belief and another person a low belief; people are making errors, but those errors aren't correlated.
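To make this setup concrete, here's a small simulation sketch (my own illustration, not from the lecture; the society size, noise level, and random network here are illustrative assumptions): each person starts at mu plus an independent, zero-mean error, and then everyone repeatedly takes a weighted average of their neighbors' beliefs.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 0.3  # true state, e.g. the probability of global warming (illustrative)
n = 50    # number of individuals (illustrative)

# Time-0 beliefs: mu plus independent, zero-mean, finite-variance errors.
beliefs = mu + rng.normal(0.0, 0.1, size=n)

# A row-stochastic trust matrix T: row i gives i's weights on everyone.
T = rng.random((n, n))
T /= T.sum(axis=1, keepdims=True)

# DeGroot updating: b(t+1) = T b(t), repeated until beliefs settle.
for _ in range(100):
    beliefs = T @ beliefs

print(beliefs.max() - beliefs.min())  # essentially 0: a consensus is reached
print(abs(beliefs.mean() - mu))       # small here, but how small depends on T
```

With a dense random T like this, influence is spread widely, so the consensus lands near mu; the question in the lecture is exactly when that keeps working as n grows.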

Okay. So let's consider large societies; we want to ask when the crowd is going to be wise. So when is it that, when they get together and communicate through a network, they're actually going to come to a reasonably accurate estimate of mu?

So what we want is a situation where the probability that the limiting belief over time differs from mu by more than some delta vanishes in large societies. Okay, so there's a bunch of quantifiers here: we look at the limit belief, and we want the probability that it differs from mu by more than some delta to vanish as the society becomes large. Okay? So in larger and larger societies, when is it that they're going to be accurate?
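In symbols (my own notation, not necessarily the lecture's slides: b_i^t(n) is person i's belief at time t in a society of size n), the condition being described is:

```latex
\lim_{n \to \infty} \Pr\!\left( \left| \lim_{t \to \infty} b_i^{t}(n) - \mu \right| > \delta \right) = 0
\qquad \text{for all individuals } i \text{ and all } \delta > 0 .
```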

So obviously, if there are only two individuals, then we've only got two signals, and if we're each making an error, even the average of those two errors isn't going to give us a very accurate number. So to get accuracy when people are making errors, we're going to have to average over a large number of signals. But then the question is: when can the overall society average in a useful manner?

Okay. So one thing that's very useful is a variation on the weak law of large numbers; you can prove this easily using Chebyshev's inequality. So let's consider a situation where all these errors are independent, so there's a bunch of people making independent errors. Each person has a 0 mean in their error, so they're either above or below, but in expectation they're centered at 0. And they each have finite variance, so that nobody's infinitely ignorant. Then let's suppose that we have some influence weights. Whatever those influences are, society n has a vector s1 through sn; we'll call those the si(n)'s. So in society n you've got an s1 through sn, and then a different vector when we add an extra person. So each one of these societies has a

vector. The weak law of large numbers tells us what happens when we average those errors. The influence weights capture how much of each person's error enters into the overall societal error, and that weighted sum of the errors is going to converge to 0 if and only if the largest influence that anybody has goes to 0, okay. So if anybody retains influence, then

what we're going to do is end up retaining weight on that person's error. Their belief is going to be a non-trivial part of the overall society's belief. And so it's going to be necessary that everybody have a negligible weight in the limit. So as society grows, influence has to disperse, so that we're putting weight on many different individuals. If we all keep listening to the same person, we're going to have inaccuracy. And actually, that's going to be enough: as long as society spreads the weight out, given that everybody has finite variance, averaging a bunch of these variables will give us a good answer.
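Here's a quick numerical sketch of that variation on the weak law (the weights and noise level are illustrative choices of mine): a weighted average of independent, zero-mean errors has variance sigma^2 times the sum of squared weights, so it collapses toward 0 when the maximum weight shrinks, but not when one person keeps a fixed share of the weight.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.2  # common error standard deviation (illustrative)

for n in (10, 100, 10000):
    errors = rng.normal(0.0, sigma, size=n)  # independent, zero-mean, finite variance

    s_even = np.full(n, 1.0 / n)    # max influence 1/n -> 0
    s_leader = np.full(n, 0.5 / n)  # half the weight spread evenly...
    s_leader[0] += 0.5              # ...and half stuck on one individual

    # Var(s @ errors) = sigma**2 * sum(s**2), which vanishes iff max(s) -> 0.
    print(n, abs(s_even @ errors), abs(s_leader @ errors))
```

The first column of errors shrinks as n grows; the second stays roughly the size of half of person 0's error, no matter how large the society is.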

So wise crowds occur if and only if the maximum influence vanishes. Okay, so that's a nice, simple result that tells us we're going to get convergence to the right belief if and only if, when we look at larger and larger societies, each one of these influence weights is tending to 0. Right, so all the weights tend to 0, and in particular the max tends to 0.
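Computationally, one way to sketch this check (my own illustrative code, not from the course materials): the influence vector s is the left eigenvector of the row-stochastic T for eigenvalue 1, normalized to sum to 1, and we can watch its largest entry shrink as the society grows.

```python
import numpy as np

def influence(T):
    """Left eigenvector of T for eigenvalue 1, normalized so entries sum to 1."""
    vals, vecs = np.linalg.eig(T.T)
    s = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return s / s.sum()

rng = np.random.default_rng(2)
for n in (10, 100, 1000):
    T = rng.random((n, n))             # dense random attention weights (illustrative)
    T /= T.sum(axis=1, keepdims=True)  # normalize rows: T is row stochastic
    s = influence(T)
    print(n, s.max())                  # max influence shrinks roughly like 1/n
```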

6:43

Okay. So what's a sufficient condition for this? Suppose that we look at a situation where you actually have reciprocal attention. So let's make T not only row stochastic but column stochastic. So everybody gives some weight out, but they also get the same total weight in; everybody has somebody paying attention to them, and the weight coming in equals the weight going out. Then you would get s equal to 1 over n for everybody, right? So that's a situation where T would be wise.
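A sketch of why that works (illustrative code of my own): any convex combination of permutation matrices is doubly stochastic, and for any column-stochastic T the uniform vector satisfies s T = s, so everyone's influence is exactly 1/n.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8

# Build a doubly stochastic T as a random mixture of permutation matrices.
mix = rng.dirichlet(np.ones(5))
T = sum(w * np.eye(n)[rng.permutation(n)] for w in mix)

# Rows sum to 1 (weight out) and columns sum to 1 (weight in).
print(np.allclose(T.sum(axis=1), 1.0), np.allclose(T.sum(axis=0), 1.0))

# The uniform vector is the influence vector: each entry of s @ T is a
# column sum of T divided by n, so s @ T = s and s_i = 1/n for everyone.
s = np.full(n, 1.0 / n)
print(np.allclose(s @ T, s))  # True
```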

So if everybody got as much weight in as they were giving out, we'd be in good shape; reciprocal trust would be something that would imply wisdom. But making sure that the trust that comes in to any individual is the same as what goes out is a very strong condition. So, generally in society, we're going to

have some heterogeneity in terms of how much, overall, somebody gets paid attention to. And so what's important is that, when we're looking at this, there's no single individual that's getting too much of the weight from other individuals who matter. Right? So if there were some i, for instance, with everybody putting weight at least a on them, then their overall influence would be at least a. So there can't be anybody who gets, you know, too much attention.

So you can't have too strong an opinion leader. That's going to be an obvious condition: if anybody's getting too much weight in, their eventual belief is going to influence society. And so, as the network becomes larger and larger, it can be that, you know, each individual is only listening to a few people, so people are getting a lot of weight from a few neighbors. But it can't be that overall somebody is getting weight from the whole society at a rate of at least a. So if there's anybody that's getting too much weight in, then that's going to be detrimental, and you won't end up getting convergence.

Now, you can generalize these kinds of conditions. So in the paper with Ben Golub, we give more explicit characterizations of the conditions: you can't have any group that's too influential, and you have to have some balance across groups. As long as things work out to be reasonably balanced, then you end up with convergence to accurate beliefs, and if not, then you can end up with the wrong kinds of beliefs in the limit. Okay, so that takes us through a little bit of understanding the DeGroot model, learning, and convergence. We'll wrap up our discussion of learning next, and then we can start turning to games on networks.