So in this section let's start discussing sensitivity and specificity. A paper you can look at, by Sutton and colleagues, examined the use of amylase and lipase in the diagnosis of acute pancreatitis. That's a disease in which the pancreas becomes acutely inflamed, for instance from the excessive use of alcohol or from a gallstone that blocks the pancreatic duct. Now, they chose as a positive test a lipase raised to more than three times the upper limit of normal. Every laboratory will have a normal range; if the lipase was higher than three times the upper limit of that range, they would call it a positive test as far as the lipase is concerned. And they quoted a sensitivity of 64% and a specificity of 97%. What does that mean?

So let's get to the definitions. Sensitivity is the probability, usually expressed as a percentage, of a test result being positive given that the disease really is there. They quoted 64%, so there was a 64% probability of the test being positive given that the patient really had acute pancreatitis. Now, that's not so great: 64% is close to 50%, almost the toss of a coin. Specificity is the probability, again usually expressed as a percentage, of a result being negative given that the disease really isn't there. They quoted 97%, and that's quite good as far as specificity is concerned.

But let's explain it this way. Imagine we had 1,000 volunteers, and by some gold standard we know that ten people out of the 1,000 have a certain disease and 990 do not. Now we have to stand still a bit and talk about this gold standard. The gold standard is some hypothetical test that is absolutely 100% accurate. That doesn't really exist, but imagine we could know for certain that a disease really is there, or really isn't there. There's no test that can truly do that for us, but imagine there is.
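To make the quoted figures concrete, here is a minimal sketch of what a 64% sensitivity implies for a group of patients who truly have the disease. The cohort size of 100 is purely illustrative, and the variable names are my own, not from the study:

```python
# Quoted figures from the lipase study (Sutton and colleagues):
sensitivity = 0.64   # P(positive test | pancreatitis present)
specificity = 0.97   # P(negative test | pancreatitis absent)

# Hypothetical illustration: out of 100 patients who truly have
# acute pancreatitis, how many would the lipase test pick up?
patients_with_disease = 100
expected_true_positives = sensitivity * patients_with_disease
expected_false_negatives = (1 - sensitivity) * patients_with_disease

print(f"Detected: {expected_true_positives:.0f}")  # 64 picked up
print(f"Missed:   {expected_false_negatives:.0f}")  # 36 missed
```

The point of the sketch is that sensitivity only tells you about the diseased group: roughly a third of genuine pancreatitis patients would return a negative lipase on this threshold.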
Imagine it is a tumor and we can take out a piece, look at it under a microscope, and everyone would agree there's a tumor there, or there isn't one. So, imagine there is this gold standard. In practice we usually compare a new test to some other reference test, but imagine we absolutely knew that 10 patients out of 1,000 did have the disease and 990 did not.

Now we introduce some new test, and we want to know how accurate this new test of ours is. Look at the columns: for the patients with the disease, the new test is administered and will come back positive in some and negative in others. If the test comes back positive and we know the patient really has the disease, that is a true positive. If that test comes back negative, that's a false negative, because the patient really did have the disease. On the other hand, if we look down the last column, at the patients who don't have the disease: if they test negative, that is a true negative, but if they return a positive result, that's a false positive, because they really did not have the disease.

Now, let's add some numbers to this. Imagine that in our column of ten patients with the disease, nine came back with a positive result (true positives) and only one had a negative result (a false negative). And out of the 990 without the disease, 90 had a false positive and 900 had a true negative test.

Now let's calculate the sensitivity. Remember, we said that sensitivity is the probability of picking up the disease if it really is there. That's going to be nine divided by ten, which is 90%. If you think about the pancreatitis study, that is what their low sensitivity of 64% meant: the test was only going to pick up that fraction of the patients who really had pancreatitis. On the other hand, the specificity is the number of true negatives divided by the number of patients who really don't have the disease. In our example, that's 900 divided by 990, which is about 91%. So that is the difference between sensitivity and specificity.
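The worked example above can be written out in a few lines of Python. The counts are the hypothetical ones from the example (1,000 volunteers, 10 diseased by the gold standard); the variable names are mine:

```python
# Hypothetical 2x2 counts from the worked example:
# 1,000 volunteers; 10 with the disease (by the gold standard), 990 without.
true_positives = 9      # diseased, test positive
false_negatives = 1     # diseased, test negative
false_positives = 90    # not diseased, test positive
true_negatives = 900    # not diseased, test negative

# Sensitivity: P(test positive | disease present)
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: P(test negative | disease absent)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")   # 90%
print(f"Specificity: {specificity:.1%}")   # 90.9%
```

Notice that each figure uses only one column of the 2x2 table: sensitivity divides by the 10 who have the disease, specificity by the 990 who do not.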
In the next section, we're going to get to the positive and negative predictive values. That is where we sit with the results in front of us and now have to interpret them.