In the field of diagnostic research, the concepts of sensitivity, specificity, positive predictive value and negative predictive value all provide valuable information on the predictive qualities of a diagnostic test. In this article, we explain these concepts in more detail.

Sensitivity and specificity

Overall accuracy: (TP + TN) / (TP + TN + FP + FN)

The overall accuracy of a test refers to the proportion of true classifications. It is expressed as a number between 0 and 1, with a score of 1 indicating perfect test accuracy. If we have a group of patients for whom both the disease status and the test outcome are known, we can fill in the total number of patients per category in a 2×2 table and then calculate the overall accuracy of the test:

                     Disease present         Disease absent
Test positive        True positive (TP)      False positive (FP)
Test negative        False negative (FN)     True negative (TN)
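The calculation can be sketched in Python as follows (the function name is our own, not part of any standard library):

```python
def overall_accuracy(tp, tn, fp, fn):
    """Proportion of true classifications (true positives plus true negatives)
    among all classifications made by the test."""
    return (tp + tn) / (tp + tn + fp + fn)
```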


Sensitivity: TP / (TP + FN)

Sensitivity: Sensitivity refers to the proportion of positive cases that are correctly identified as such (e.g., the percentage of sick people who are correctly identified as having the condition). In other words, given the fact that a person has a disease, sensitivity refers to the chance that a diagnostic assessment tool will correctly indicate that the disease is present.

 

Specificity: TN / (TN + FP)

Specificity: Specificity refers to the proportion of negative cases that are correctly identified as such (e.g., the percentage of healthy people who are correctly identified as not having the condition). In other words, given the fact that a person does not have a disease, specificity refers to the chance that a diagnostic assessment tool will correctly indicate that the disease is absent.
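Both definitions translate directly into code. A minimal Python sketch (function names are our own):

```python
def sensitivity(tp, fn):
    # Of all truly diseased patients (TP + FN), the fraction the test
    # correctly flags as positive.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Of all truly healthy patients (TN + FP), the fraction the test
    # correctly flags as negative.
    return tn / (tn + fp)
```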

 

Clinical practice: positive predictive value and negative predictive value

In clinical practice, a diagnostic assessment tool is generally used when both patient and healthcare professional do not know the actual disease status of the patient; finding out the disease status is often the sole reason for performing the diagnostic assessment to begin with. While sensitivity and specificity offer us the probability of a certain test result when given the disease status, in clinical practice we are more often interested in the opposite direction. Generally we want to know the probability of having a disease when given a certain test result. For this purpose, the concepts of positive predictive value and negative predictive value are more informative than sensitivity and specificity.

Positive predictive value: TP / (TP + FP)

Positive Predictive Value (PPV): The positive predictive value refers to the proportion of positive tests that prove to be accurate. Given the fact that a test score is positive, the positive predictive value refers to the chance of the patient actually having the disease. By filling in the total number of patients per category in such a table, we can calculate the positive predictive value through the formula above.

 

Negative predictive value: TN / (TN + FN)

Negative Predictive Value (NPV): The negative predictive value refers to the proportion of negative tests that prove to be accurate. Given the fact that a test score is negative, the negative predictive value refers to the chance of the patient actually being disease-free. By filling in the total number of patients per category in such a table, we can calculate the negative predictive value through the formula above.
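The two predictive values can be sketched in the same way (again, function names are our own):

```python
def positive_predictive_value(tp, fp):
    # Of all positive test results (TP + FP), the fraction that is
    # truly diseased.
    return tp / (tp + fp)

def negative_predictive_value(tn, fn):
    # Of all negative test results (TN + FN), the fraction that is
    # truly disease-free.
    return tn / (tn + fn)
```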


Example 1:
Imagine we have developed a new, relatively simple screening instrument for Alzheimer’s disease. For the sake of argument, assume that the gold standard for assessing the presence of Alzheimer’s disease is histological identification of amyloid plaques and neurofibrillary tangles within the medial temporal lobe of patients. We compare the outcomes of our new, simple screening test with the outcomes of the gold standard method. The outcomes are displayed in the following table.

Example 1

                     Disease present     Disease absent
Test positive        TP = 24             FP = 17
Test negative        FN = 10             TN = 160

We can now calculate the sensitivity, specificity, positive predictive value and negative predictive value for our new diagnostic test:
– Sensitivity: 24 / (24 + 10) = 0.71
– Specificity: 160 / (160 + 17) = 0.90
– Positive predictive value: 24 / (24 + 17) = 0.59
– Negative predictive value: 160 / (160 + 10) = 0.94
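The calculations above can be reproduced end to end in Python with the cell counts from the table:

```python
# Example 1 cell counts from the screening vs. gold standard comparison.
tp, fn, fp, tn = 24, 10, 17, 160

sensitivity = tp / (tp + fn)   # 24 / 34
specificity = tn / (tn + fp)   # 160 / 177
ppv = tp / (tp + fp)           # 24 / 41
npv = tn / (tn + fn)           # 160 / 170

print(round(sensitivity, 2), round(specificity, 2), round(ppv, 2), round(npv, 2))
# → 0.71 0.9 0.59 0.94
```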

 

Disease prevalence and its implications for diagnostic assessments

Sensitivity and specificity are stable test characteristics that do not depend on the prevalence of a disease within the population. Positive and negative predictive values, however, are influenced by the prevalence of a disease in the population being examined. We will illustrate this with the following example.

Example 2: Imagine there is an Ebola epidemic in West Africa, and due to fear of the virus also spreading in Europe, a large-scale screening campaign is initiated. The aim of this screening campaign is to examine all individuals in Europe who have resided in West Africa in the last 4 months. For this purpose we have developed a new and simple diagnostic screening instrument. We have identified a large number of people who have been to West Africa in the past 4 months; their screening results and actual disease status are shown in the table below. We can see that the prevalence of Ebola amongst the examined population is very low, with 4 out of 24880 people being afflicted by the disease.

Example 2

                     Disease present     Disease absent
Test positive        TP = 4              FP = 150
Test negative        FN = 0              TN = 24726

Let us now look at the sensitivity, specificity, positive predictive value and negative predictive value of our diagnostic screening instrument:
– Sensitivity: 4 / (4 + 0) = 1.00 = 100%
– Specificity: 24726 / (24726 + 150) = 0.994 = 99.4%
– Positive predictive value: 4 / (4 + 150) = 0.026 = 2.6%
– Negative predictive value: 24726 / (24726 + 0) = 1.00 = 100%

This example illustrates that when the prevalence of a disease is very low, the positive predictive value of a screening test can be poor even when the test itself is excellent. Even though the test in the example displays an extremely high sensitivity and specificity, only 2.6% of the people who test ‘positive’ actually have the disease. And while the positive predictive value is often limited when the prevalence of a disease is very low, the negative predictive value is similarly limited when the prevalence of a disease is very high.
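The dependence of PPV on prevalence follows from Bayes’ theorem, and can be sketched in Python for a test with fixed sensitivity and specificity (the function name and the 10% comparison prevalence are our own illustration):

```python
def ppv_from_prevalence(sens, spec, prevalence):
    # Bayes' theorem: P(disease | positive test) in a population with the
    # given disease prevalence.
    true_pos_rate = sens * prevalence
    false_pos_rate = (1 - spec) * (1 - prevalence)
    return true_pos_rate / (true_pos_rate + false_pos_rate)

# The screening test from Example 2: sensitivity 4/4, specificity 24726/24876.
sens, spec = 1.0, 24726 / 24876

# At the very low prevalence of Example 2 (4 in 24880), PPV is only ~2.6%.
print(ppv_from_prevalence(sens, spec, 4 / 24880))

# The same test applied in a population where 10% are infected would have a
# far higher PPV (~95%), even though the test itself has not changed.
print(ppv_from_prevalence(sens, spec, 0.10))
```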