So, a quick summary of the performance evaluation section. I just want to emphasize two main points. Number one, understand your environment. One way to think about this is the distinction between the two sets of curves we showed you earlier in this module: how noisy is the environment relative to the signal you observe in the performance evaluation? We gave you two examples. In one, there's quite a bit of difference between high effort and low effort, so it's relatively easy to separate the signal from the noise. Conversely, it could be that there's not much difference between high effort and low effort, so it's much harder to find the signal in the noise. A qualitative way to think about this is to characterize an environment as relatively chance-based, more like a lottery, or relatively skill-based, more like a math problem. Most are mixed, by the way. So you should be asking yourself: how much is my environment, or the environment of the employees I'm evaluating, like a math problem, and how much of it is like a lottery?
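To make that lottery-versus-math-problem distinction concrete, here's a minimal simulation sketch, not from the lecture itself; the effort levels and noise values are hypothetical. It just shows how often the genuinely higher-effort employee posts the better number in a single period as the noise grows.

```python
# A minimal, hypothetical illustration of signal versus noise in an evaluation
# environment. Effort levels and noise standard deviations are made up.
import random

random.seed(0)

def observed_score(true_effort, noise_sd):
    # Observed performance = true effort contribution + chance.
    return true_effort + random.gauss(0, noise_sd)

def prob_high_beats_low(noise_sd, trials=10_000):
    # How often does the higher-effort employee actually post the better
    # number in a single evaluation period?
    high, low = 60, 50  # hypothetical true effort levels
    wins = sum(
        observed_score(high, noise_sd) > observed_score(low, noise_sd)
        for _ in range(trials)
    )
    return wins / trials

# "Math problem"-like environment: noise is small relative to the effort gap.
print(prob_high_beats_low(noise_sd=3))   # close to 1.0 -- easy to read the signal
# "Lottery"-like environment: noise dwarfs the effort gap.
print(prob_high_beats_low(noise_sd=40))  # barely above 0.5 -- mostly chance
```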
The trick here, though, is that you also need to understand that you're biased. We have these biases in making these assessments. We touched on them briefly, but we tend to make non-regressive predictions: we don't understand regression to the mean, and so we forecast too directly from things that have happened in the past. We tend to believe that outcomes reflect underlying quality or effort; this is outcome bias. We tend to believe that we knew something was going to happen before it happened, even though we didn't; this is hindsight bias. And we tend to tell narratives that make sense of all the events we observe. All of these things get in the way of our appreciating the role that chance plays, and that means they will bias our understanding of the environment. They will lead us to believe that our environments are more like a math problem than they actually are and to underestimate the extent to which they are a little bit like a lottery. All of those are problems for the way we infer the effort level and quality of our employees from our performance measures.

The trick, of course, is to account for chance, and the key there is persistence. You need to find the most fundamental performance measure possible, which means the most skill-related. And the most skill-related, most fundamental measure will be the most persistent over time. We gave you an extended example of this with the NFL draft. But the key tactic is to take split samples and ask: are the differences we observe in one period still there in the second period? That's the key test for persistence, the key test for true skill, true signal as opposed to chance. You'll see a small sketch of this split-sample check at the end of this summary.

And finally, some critical questions. You can think of these as a battery you can take to any performance evaluation meeting, any budget meeting, any resource allocation meeting, anything that turns on judgments about an employee or a technology, some kind of performance evaluation. Are the differences persistent or random? How do we know this isn't just good or bad luck? Is the sample large enough to draw strong conclusions? How can we make the sample larger? How many different signals are we really tapping into here? How can we make them as independent as possible? And then finally, what else do we care about? Are we measuring enough? What can we measure that's actually more fundamental than what we're measuring right now? The keys are persistence, large samples, independent assessments, and more fundamental measures, more process measures. You can take these critical questions to any evaluation and improve the decision-making process.
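To make the split-sample persistence check concrete, here's a minimal sketch under my own assumptions; the names, scores, and the choice of a plain correlation are hypothetical, not the lecture's method. The idea is just to split the data into two periods and ask whether the period-one differences show up again in period two.

```python
# A hypothetical illustration of the split-sample persistence check: are the
# differences we see in one period still there in the next, or do they wash
# out the way luck would?
from statistics import mean

# Performance measures for the same employees in two separate periods
# (made-up names and numbers).
scores = {
    "Avery": {"period1": 82, "period2": 79},
    "Blake": {"period1": 74, "period2": 76},
    "Casey": {"period1": 91, "period2": 68},  # big period-1 number that doesn't persist
    "Devon": {"period1": 65, "period2": 66},
}

def pearson(xs, ys):
    # Plain Pearson correlation: near 1 suggests persistent, skill-like
    # differences; near 0 suggests the period-1 differences were largely chance.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

p1 = [s["period1"] for s in scores.values()]
p2 = [s["period2"] for s in scores.values()]
print(f"split-sample persistence (correlation across periods): {pearson(p1, p2):.2f}")
```

In practice you'd want far more than four employees, but the test is the same: the larger and more independent the two samples, the more the cross-period correlation tells you about skill rather than luck.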