So in this section let's move on to positive and negative predictive values. Now look at the article by Sutton and colleagues, looking again at amylase and lipase for the diagnosis of acute pancreatitis. And again, I remind you, the positive test was a lipase raised more than three times the upper limit of the reference range, and they found a positive predictive value of only 3.8% and a negative predictive value of 99.3%. So now we're looking at something completely different from the values that we saw for sensitivity and specificity. Now we have the result back and we've got to interpret that result. The positive predictive value is the probability of a patient actually having the disease given a positive result. With the negative predictive value we're looking at the probability of the patient not having the disease given that the test result was negative. So, how do we interpret a positive test result and a negative test result? Look at the positive predictive value, the probability of having the disease when the test is positive: it was 3.8%. What does that mean? It means that only 3.8% of the patients who had a lipase raised more than three times the normal limit actually had pancreatitis, and that is quite a low value. If the result came back and the lipase was not raised, 99.3% of those patients did not have the disease. So you can clearly see how we look at positive and negative predictive values. So let's go back to our example: we had a thousand volunteers, and by some magic we knew that 10 had the disease and 990 did not. So we know that if they had the disease and the test result came back positive, that was a true positive. If they did not have the disease and the test came back negative, that was a true negative. And we know what the false positives and false negatives were. So let's just look across the rows now.
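As a rough sketch (not part of the lecture), the contrast being drawn here can be written as two small helpers: sensitivity and specificity read down the disease-status columns of the 2×2 table, while the predictive values read across the test-result rows. The function names are just illustrative.

```python
def column_measures(tp, fp, fn, tn):
    """Sensitivity and specificity: read down the disease-status columns."""
    sensitivity = tp / (tp + fn)  # positive tests among those WITH the disease
    specificity = tn / (tn + fp)  # negative tests among those WITHOUT the disease
    return sensitivity, specificity

def row_measures(tp, fp, fn, tn):
    """Predictive values: read across the test-result rows."""
    ppv = tp / (tp + fp)  # diseased among everyone who tested positive
    npv = tn / (tn + fn)  # disease-free among everyone who tested negative
    return ppv, npv
```

The same four cell counts feed both pairs of measures; only the direction you read the table changes.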
If we add up all the patients that had positive results, that's 9 with the disease and then 90 without the disease, which gives us 99, and the negative results add up to 901. We're going to use those to work out our positive and negative predictive values. And that's how we do it. You see the calculations there, very easy: the positive predictive value is the true positives divided by everyone that had a positive result, and the negative predictive value is the true negatives divided by everyone who had a negative result. So we see a positive predictive value there of only 9.1%. And remember, in the example of the article I talked about, it was very low, less than 4%. And you see here, you can start to intuitively understand why the positive predictive value is so low: it is because the prevalence of the disease is so low. In our instance here it was only ten out of 1,000 people, only 1% of the population. And we really talk about this a lot when we talk about mammography, for instance. When a mammogram comes back suggestive of cancer, everyone is obviously concerned. But if we think what the prevalence of breast cancer is in the population that comes for screening, it's probably quite low. So when a test result does come back positive, we've got to interpret it in that light: what is the chance of that really being a cancer? So let me show you how sensitive things really are to this prevalence. Now imagine we do the following. We've just had an example of 1,000 people but with a prevalence of only 1% of the disease. Let's increase that massively; let's say that there's a 40% prevalence of the disease. And we change the numbers around. And you see how I've changed them around so that the sensitivity and specificity of our test stay essentially the same. If you work out 360 divided by 400 or 545 divided by 600 you're going to get essentially the same sensitivity and specificity.
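The worked example above can be checked directly; this is a minimal sketch using the cell counts from the 1%-prevalence table (9 true positives, 1 false negative, 90 false positives, 900 true negatives):

```python
# 1,000 volunteers, prevalence 1%: 10 diseased, 990 healthy.
tp, fn = 9, 1      # of the 10 with the disease
fp, tn = 90, 900   # of the 990 without it

positives = tp + fp  # 99 positive results in total
negatives = fn + tn  # 901 negative results in total

ppv = tp / positives  # 9 / 99   -> about 9.1%
npv = tn / negatives  # 900 / 901 -> about 99.9%
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```

Even though the test catches 9 of the 10 diseased patients, the 90 false positives from the large healthy group swamp them, which is exactly why the PPV comes out so low.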
But because the prevalence has changed, look at the calculations now for the positive and negative predictive values. The positive predictive value, the likelihood that the patient really has the disease if the test comes back positive, jumps up from 9.1% to 87%, just because the prevalence of the disease is so much higher. And the negative predictive value falls slightly, from up in the 99s down to 93%. So you see the importance of positive and negative predictive values, and what effect prevalence has on them. So, what does this mean to you? If you read a journal article, and it quotes negative and positive predictive values, look carefully at the sample that was included in that research. What was the prevalence of the disease in that sample versus the prevalence of the disease in the patient population that you serve in your community? If there is a big difference between the prevalence of the sample and the prevalence of your population you cannot use those predictive values directly. Now, fortunately there are equations which you can use to convert between different prevalences. It's the same test, and we can just normalize things so we compare apples to apples, and where that has been done in the literature you can extrapolate the results to the population that you see. So be very careful of positive and negative predictive values: they are highly sensitive to the prevalence of the disease in the sample that was part of that research project. So, there you have it: sensitivity and specificity, positive and negative predictive values.
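The conversion equations mentioned above are an application of Bayes' theorem: a test's sensitivity and specificity are (roughly) fixed properties of the test, so the predictive values at any prevalence can be recomputed from them. A sketch, using the sensitivity and specificity from the lecture's 1%-prevalence table:

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Predictive values at a given prevalence, via Bayes' theorem."""
    p = prevalence
    ppv = (sensitivity * p) / (sensitivity * p + (1 - specificity) * (1 - p))
    npv = (specificity * (1 - p)) / (specificity * (1 - p) + (1 - sensitivity) * p)
    return ppv, npv

sens, spec = 9 / 10, 900 / 990   # from the 1%-prevalence example

print(ppv_npv(sens, spec, 0.01))  # PPV about 9.1%, NPV about 99.9%
print(ppv_npv(sens, spec, 0.40))  # PPV jumps to roughly 87%, NPV falls to about 93%
```

Same test, same sensitivity and specificity; only the prevalence argument changes, and the PPV swings from under 10% to nearly 90%.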