Written by the Fiveable Content Team • Last updated September 2025
Definition
A false positive is a test result that incorrectly indicates the presence of a condition or characteristic when that condition is actually absent. In statistical hypothesis testing, it is the error of rejecting a null hypothesis that is in fact true (a Type I error).
5 Must Know Facts For Your Next Test
A false positive can lead to unnecessary further testing, treatment, or anxiety for the individual being tested.
The probability of a false positive is controlled by the significance level (α) chosen for the statistical test, as the simulation sketch after this list illustrates.
Lowering the significance level (α) decreases the chance of a Type I error (false positive) but increases the chance of a Type II error (false negative).
When the prevalence of the condition being tested for is low in the population, a larger share of positive test results will be false positives, even for an accurate test.
Strategies to reduce false positives include using more specific tests, lowering the significance level (α), or increasing the sample size so that a stricter α can be used without sacrificing power.
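To make the role of α concrete, here is a minimal simulation sketch (assuming NumPy and SciPy are available; all values are illustrative): when the null hypothesis is true, the long-run proportion of tests that reject it, i.e., the false positive rate, should land close to the chosen α.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05        # chosen significance level
n_trials = 10_000   # number of simulated studies
n_obs = 30          # observations per study

false_positives = 0
for _ in range(n_trials):
    # The null hypothesis is true: the data really come from a
    # population with mean 0, and we test H0: mean = 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=n_obs)
    result = stats.ttest_1samp(sample, popmean=0.0)
    if result.pvalue < alpha:   # rejecting a true null is a Type I error
        false_positives += 1

print(f"Observed false positive rate: {false_positives / n_trials:.3f}")
# Expected to be close to alpha = 0.05
```

Running more trials pushes the observed rate even closer to α, which is exactly what it means for α to control the false positive rate.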
Review Questions
Explain the relationship between a false positive and a Type I error.
A false positive is the result of a Type I error in statistical hypothesis testing. A Type I error occurs when the null hypothesis is true, but is incorrectly rejected, leading to a false positive result. In other words, a false positive is the incorrect identification of a condition or characteristic that is actually not present, which is the consequence of a Type I error. The probability of a false positive is directly related to the significance level (α) chosen for the statistical test, as a higher α increases the chance of a Type I error and, consequently, a false positive.
Describe how the prevalence of a condition in the population can affect the likelihood of false positives.
The prevalence of the condition being tested for in the population is a key factor in determining the likelihood of false positives. When the prevalence of a condition is low, the positive predictive value of the test decreases, meaning that a larger proportion of positive test results will be false positives. This is because even a test with high sensitivity and specificity will still produce some false positives when the condition is rare in the population. Consequently, false positives are more likely to occur when testing for conditions with low prevalence in the population.
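A quick numerical sketch of this effect, using Bayes' theorem with hypothetical values for prevalence, sensitivity, and specificity (the numbers are illustrative assumptions, not figures from the text):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(condition present | positive test), via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical test accuracy, applied at a common vs. a rare prevalence.
sensitivity, specificity = 0.95, 0.95
for prevalence in (0.10, 0.001):
    ppv = positive_predictive_value(prevalence, sensitivity, specificity)
    print(f"prevalence={prevalence:.3f} -> PPV={ppv:.3f}")
# At 10% prevalence roughly two thirds of positives are true positives;
# at 0.1% prevalence fewer than 2% are, so most positives are false.
```

With the same test accuracy, dropping the prevalence from 10% to 0.1% collapses the positive predictive value, which is why screening rare conditions generates so many false positives.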
Evaluate the trade-off between reducing false positives and increasing false negatives when adjusting the significance level (α) of a statistical test.
The significance level (α) of a statistical test directly affects the balance between false positives and false negatives. Decreasing the significance level (α) reduces the probability of a Type I error, which in turn decreases the rate of false positives. However, this also increases the probability of a Type II error, leading to a higher rate of false negatives. Conversely, increasing the significance level (α) reduces the chance of false negatives but raises the likelihood of false positives. This trade-off must be carefully considered when choosing the appropriate significance level for a statistical test, as the consequences of false positives and false negatives can vary depending on the context and the relative importance of avoiding each type of error.
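As an illustration of this trade-off, the sketch below computes both error rates for a one-sided z-test with a known standard deviation; the sample size, true effect, and α values are assumptions chosen for the example, not figures from the text.

```python
import numpy as np
from scipy import stats

n, effect, sigma = 25, 0.5, 1.0         # assumed sample size, true effect, SD
shift = effect * np.sqrt(n) / sigma     # expected z statistic under the alternative

for alpha in (0.10, 0.05, 0.01):
    z_crit = stats.norm.ppf(1 - alpha)         # one-sided rejection cutoff
    type_i = alpha                             # P(reject H0 | H0 true)
    type_ii = stats.norm.cdf(z_crit - shift)   # P(fail to reject | H0 false)
    print(f"alpha={alpha:.2f}  Type I={type_i:.2f}  Type II={type_ii:.3f}")
# Lowering alpha shrinks the Type I (false positive) rate
# while the Type II (false negative) rate grows.
```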
Statistical significance: A result is statistically significant when its p-value (the probability of obtaining a result as extreme as or more extreme than the observed one, assuming the null hypothesis is true) falls below the chosen significance level (α).
Sensitivity: Sensitivity is the ability of a test to correctly identify those who have the condition; low sensitivity raises the rate of false negatives, whereas specificity (the ability to correctly identify those without the condition) governs the rate of false positives.
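To show how these rates come out of a test's confusion matrix, here is a small sketch; the counts are hypothetical and chosen only for illustration:

```python
def confusion_rates(tp, fn, fp, tn):
    """Sensitivity, specificity, and false positive rate from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)        # true positive rate; misses are false negatives
    specificity = tn / (tn + fp)        # true negative rate
    false_positive_rate = 1 - specificity
    return sensitivity, specificity, false_positive_rate

# Hypothetical screening-study counts, for illustration only
sens, spec, fpr = confusion_rates(tp=90, fn=10, fp=45, tn=855)
print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  false positive rate={fpr:.2f}")
```

With these counts, 1 − specificity = 0.05, so about 5% of people who do not have the condition would still receive a positive (false positive) result.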