A third issue that we want to emphasize in performance evaluation is signal

independence.

And to do this I want to begin with this notion of the wisdom of crowds

which has become very popular in recent years.

The name comes from a book by James Surowiecki.

He draws on research that's been going on for probably 100 years.

But the fact that Surowiecki wrote this book led many academics to do

more research in this area.

And it has become a very hot area of research.

The basic observation is that the average of a large number of forecasts

reliably outperforms the average individual forecast.

So the motivating example comes from historical county fairs,

where you might have a large cow, for example, in a pen,

and everyone who comes to the fair gets to guess the weight of the cow.

And the interesting bit, the fascinating bit really,

is that even though most people were quite wrong in their guesses,

the average of their guesses was remarkably accurate.

And this has been shown not only at county fairs but in many other domains:

the idiosyncratic errors in all these guesses offset each other.

I might be a little bit high.

You might guess a little bit low.

If there are enough of us making these guesses, then we tend to get very close to the truth.

So this has been studied in many domains now, and shown in many places, and

it provides a way to get closer to the truth by getting more signals:

getting more people involved, getting more judgements from more people.
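
The offsetting-errors idea is easy to check with a small simulation. Every number here is an illustrative assumption (the cow's true weight, the spread of guessing errors, the crowd size), not data from the fairs:

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1200  # hypothetical true weight of the cow, in pounds

# Each fairgoer's guess is the truth plus an independent,
# idiosyncratic error (assumed roughly normal with a 150 lb spread).
guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(500)]

# Compare the error of the crowd's average guess with the
# average error of the individual guesses.
crowd_error = abs(statistics.mean(guesses) - TRUE_WEIGHT)
avg_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)

print(f"error of the average guess:    {crowd_error:.1f}")
print(f"average individual error:      {avg_individual_error:.1f}")
```

Because the errors are independent, they largely cancel in the average, so the crowd's error is a small fraction of the typical individual's error.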

This is something that some firms do in performance evaluation, and

that more firms should do more of.

But there's one very important caveat, and that is that the value of the crowd

critically depends on the independence of their opinions.

So if, for example, at the county fair everyone who walked up to the booth

talked with each other before they made their guess,

the value of the crowd would greatly diminish.

Those idiosyncratic errors would be a little bit less idiosyncratic.

Now they'd be related to each other.

If one person was loudly arguing that he knew, because of the breed of cow this was

and the last cow he had seen, that this cow was a particular weight,

he would influence everybody's opinion, their opinions wouldn't be independent,

and the value of those independent signals would be washed away.

So it's not merely a crowd that you need; you need a crowd whose opinions are,

to the extent possible, independent of one another.

So independence here means uncorrelated.

If the opinions are actually correlated,

then the value of each additional opinion quickly diminishes.

So you may think you have the opinions of 100 people, but if they're actually highly

correlated, then you might effectively only have the opinions of 5 or 10.

It's striking how quickly you rob

a crowd of its value when those opinions are correlated.

Here's a chart of that; it comes from Bob Clemen and Bob Winkler, down at Duke.

In a 1985 study, they worked out mathematically the equivalent number of

independent experts for a given number of experts and

the degree to which those experts' opinions are correlated.

So, here's what they found.

If the correlation is 0, in other words if the experts are perfectly independent,

then every expert you add creates that much new value,

so the equivalent number of independent experts equals the actual number.

But when those expert opinions become correlated, even at 0.2, which is a pretty

low correlation, you quickly lose the value of adding experts.

So, for example, this chart shows that as you go from 1 expert to 9,

if you have a correlation of 0.2, you never quite get above 4.

You asymptote there: you have 9 judges, but because there's a little

correlation between them, the effective number of judges is only 3 or 3 and a half.

If the correlation is 0.4, it plateaus much lower, around 2.

If the correlation is 0.8,

you're not getting much more than 1 opinion even though you've got 9 experts.
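
The numbers read off the chart are consistent with the standard equicorrelation formula for the equivalent number of independent experts, n_eff = n / (1 + (n - 1)ρ). Here is a minimal sketch of that relationship, assuming equal pairwise correlation ρ among the experts (the function name is ours, not from the study):

```python
def effective_experts(n: int, rho: float) -> float:
    """Equivalent number of independent experts, given n experts whose
    opinions share a common pairwise correlation rho (equicorrelation)."""
    return n / (1 + (n - 1) * rho)

# Reproduce the chart's values for a panel of 9 experts.
for rho in (0.0, 0.2, 0.4, 0.8):
    print(f"rho={rho}: {effective_experts(9, rho):.2f}")
# rho=0.0: 9.00
# rho=0.2: 3.46
# rho=0.4: 2.14
# rho=0.8: 1.22
```

At ρ = 0.2 the equivalent number stays under 4 no matter how large n grows (the limit as n grows is 1/ρ = 5), which is the asymptote visible in the chart.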

This should be very sobering to us.

So we want to push you towards crowds.

We want to push you towards more opinions,

more assessments because that's going to help.

But you've got to simultaneously try to maintain the independence of those opinions.

In a recent study, Florian Zimmerman and colleagues from Zurich

found that even when you tell people the correlation,

even when you tell them exactly where the correlation is and

how strong it is, people don't properly adjust.

It is sobering to think about how people deal with