Bias is the silent saboteur of research validity, and biostatistics exams will test whether you can identify when, why, and how different biases distort study findings. You're not just being tested on definitions—you need to understand the underlying mechanisms that introduce systematic error into research. This means recognizing whether a bias affects who gets into a study, how data is collected, or how results are interpreted and shared.
The biases you'll encounter fall into distinct categories based on where in the research process they occur: participant selection, data collection, analysis, and dissemination. Mastering these categories helps you quickly diagnose problems in study design and propose solutions—exactly what FRQs demand. Don't just memorize names—know what stage of research each bias threatens and what strategies prevent it.
Participant selection biases occur before data collection even begins. When the sample doesn't accurately represent the target population, external validity collapses—no matter how rigorous the rest of the study.
Compare: Selection bias vs. Sampling bias—both affect who's in your study, but selection bias refers to systematic differences in how participants are chosen or enrolled, while sampling bias specifically involves flawed sampling techniques. If an FRQ describes a convenience sample, think sampling bias first.
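A quick simulation makes the convenience-sample problem concrete. The scenario below is hypothetical (the population sizes, age split, and exercise means are invented for illustration): recruiting at a university gym overrepresents young adults, so the sample mean systematically overshoots the population mean no matter how many volunteers are collected.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 adults; daily exercise minutes depend on
# age group. Young adults (40% of the population) exercise more on average.
population = []
for _ in range(10_000):
    young = random.random() < 0.40
    minutes = random.gauss(35 if young else 20, 5)
    population.append((young, minutes))

true_mean = sum(m for _, m in population) / len(population)

# Convenience sample: gym recruitment yields ~90% young volunteers --
# a flawed sampling technique, i.e., sampling bias.
young_pool = [m for y, m in population if y]
old_pool = [m for y, m in population if not y]
sample = random.sample(young_pool, 180) + random.sample(old_pool, 20)
sample_mean = sum(sample) / len(sample)

print(f"Population mean:         {true_mean:.1f} min/day")
print(f"Convenience-sample mean: {sample_mean:.1f} min/day")  # biased upward
```

Note that collecting a bigger convenience sample does not help: the bias is systematic, not random, so it does not shrink with sample size.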
Data collection biases corrupt the accuracy of measurements after participants are enrolled. Systematic errors in how exposure or outcome data are gathered lead to misclassification—either differential (varying by group) or non-differential (equal across groups).
Compare: Recall bias vs. Observer bias—both involve subjective distortion, but recall bias originates with participants misremembering, while observer bias originates with researchers misinterpreting. Blinding helps with observer bias; using objective records (rather than self-report) helps with recall bias.
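Differential recall can manufacture an association out of nothing. The sketch below uses invented numbers (30% true exposure in both groups, so the true odds ratio is 1.0) and assumes cases recall exposure more completely than controls, the classic recall-bias pattern in case-control studies.

```python
import random

random.seed(0)

n = 5_000  # cases and controls per arm (hypothetical study size)

def simulate(recall_cases, recall_controls):
    """Return the observed OR from a 2x2 table under imperfect recall.

    True exposure prevalence is 30% in both cases and controls, so any
    observed OR != 1 is an artifact of differential misclassification.
    """
    a = b = c = d = 0  # a: exposed cases, b: exposed controls, c/d: unexposed
    for _ in range(n):
        # one case: truly exposed with p=0.30, reports it with p=recall_cases
        if random.random() < 0.30 and random.random() < recall_cases:
            a += 1
        else:
            c += 1
        # one control: same true exposure, possibly worse recall
        if random.random() < 0.30 and random.random() < recall_controls:
            b += 1
        else:
            d += 1
    return (a * d) / (b * c)

or_equal = simulate(0.95, 0.95)  # non-differential: both groups recall alike
or_diff = simulate(0.95, 0.70)   # differential: cases recall more completely

print(f"OR with equal recall:        {or_equal:.2f}")  # near the true OR of 1.0
print(f"OR with differential recall: {or_diff:.2f}")   # spuriously elevated
```

This is why FRQ answers about recall bias should propose objective exposure records (pharmacy databases, employment logs) rather than self-report: the distortion arises at the source and cannot be corrected afterward.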
Analysis and interpretation biases affect how relationships between variables are understood. Even with perfect selection and measurement, failing to account for extraneous variables or time-related artifacts can produce misleading conclusions.
Compare: Confounding bias vs. Information bias—confounding involves a real third variable distorting the exposure-outcome relationship, while information bias involves measurement error in the exposure or outcome itself. Confounding can often be addressed during analysis (stratification, adjustment); information bias generally cannot be fixed after data collection.
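Stratification shows why confounding is fixable in analysis. In this invented scenario, smoking (the confounder) raises both coffee drinking and disease risk, while coffee itself has no effect (true OR = 1). The crude OR looks elevated, but within each smoking stratum the association vanishes.

```python
import random

random.seed(1)

def odds_ratio(records):
    """Crude OR from (exposed, diseased) pairs."""
    a = b = c = d = 0
    for exposed, diseased in records:
        if exposed and diseased: a += 1
        elif exposed: b += 1
        elif diseased: c += 1
        else: d += 1
    return (a * d) / (b * c)

# Hypothetical cohort: smoking drives both coffee drinking and disease;
# coffee has no causal effect on disease at all.
people = []
for _ in range(20_000):
    smoker = random.random() < 0.40
    coffee = random.random() < (0.70 if smoker else 0.30)
    disease = random.random() < (0.20 if smoker else 0.05)
    people.append((smoker, coffee, disease))

crude = odds_ratio([(c, d) for _, c, d in people])
or_smokers = odds_ratio([(c, d) for s, c, d in people if s])
or_nonsmokers = odds_ratio([(c, d) for s, c, d in people if not s])

print(f"Crude OR (confounded):  {crude:.2f}")          # spuriously elevated
print(f"OR within smokers:      {or_smokers:.2f}")     # near 1.0
print(f"OR within non-smokers:  {or_nonsmokers:.2f}")  # near 1.0
```

The same logic underlies regression adjustment and Mantel-Haenszel pooling; no analogous post hoc fix exists for mismeasured exposure data, which is the key contrast with information bias.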
Dissemination biases occur after studies are completed, affecting what evidence reaches the scientific community. When published literature doesn't reflect all conducted research, systematic reviews and meta-analyses inherit distorted effect estimates.
Compare: Reporting bias vs. Publication bias—reporting bias occurs within a study (selective presentation of outcomes), while publication bias occurs across studies (selective publication of entire studies). Both distort the evidence base, but reporting bias is author-driven while publication bias involves editorial and systemic factors.
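A small simulation shows how publication bias inflates a meta-analytic effect. The numbers here are invented: the true standardized effect is 0.20, each simulated study estimates it with noise proportional to its size, and only nominally significant positive results get "published"—a filter that small studies pass only when they overshoot. (This selective survival of overshooting small studies is exactly the asymmetry a funnel plot makes visible.)

```python
import random
import statistics

random.seed(7)

true_d = 0.20  # hypothetical true standardized mean difference

all_effects, published = [], []
for _ in range(2_000):
    n = random.choice([20, 50, 100, 400])   # per-group sample size
    se = (2 / n) ** 0.5                     # approximate SE of the estimate
    est = random.gauss(true_d, se)          # this study's observed effect
    all_effects.append(est)
    if est / se > 1.96:                     # crude "significant and positive" filter
        published.append(est)

m_all = statistics.mean(all_effects)
m_pub = statistics.mean(published)

print(f"Mean effect, all studies:       {m_all:.2f}")  # near the truth
print(f"Mean effect, 'published' only:  {m_pub:.2f}")  # inflated
```

This is the mechanism behind comparing published trials against complete regulatory submissions: the unfiltered set recovers the true effect, the filtered set overstates it.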
| Concept | Best Examples |
|---|---|
| Participant selection problems | Selection bias, Sampling bias, Attrition bias |
| Measurement/data collection errors | Information bias, Recall bias, Observer bias |
| Third-variable distortion | Confounding bias |
| Time-related artifacts | Lead-time bias |
| Dissemination distortion | Reporting bias, Publication bias |
| Mitigated by blinding | Observer bias, Information bias |
| Mitigated by randomization | Confounding bias, Selection bias |
| Threatens external validity | Sampling bias, Selection bias, Attrition bias |
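Lead-time bias, the time-related artifact in the table above, is easiest to see by simulation. In this hypothetical setup, every patient dies a fixed-distribution time after biological onset regardless of screening; screening merely moves diagnosis 3 years earlier. Survival measured from diagnosis looks dramatically better even though no death is delayed.

```python
import random

random.seed(3)

def five_year_survival(dx_year, n=10_000):
    """Fraction of patients alive 5+ years after a diagnosis made at dx_year.

    Death time (years after biological onset) is fixed by the disease, not
    by when the diagnosis happens -- the core of lead-time bias.
    """
    survived = 0
    for _ in range(n):
        death_year = random.gauss(6.0, 2.0)  # unchanged by screening
        if death_year - dx_year >= 5.0:
            survived += 1
    return survived / n

symptomatic = five_year_survival(3.0)  # diagnosed when symptoms appear (year 3)
screened = five_year_survival(0.0)     # screening detects 3 years earlier

print(f"5-yr survival, symptomatic dx: {symptomatic:.0%}")
print(f"5-yr survival, screened dx:    {screened:.0%}")  # inflated by lead time
```

Because diagnosis timing alone produces the gap, survival-from-diagnosis is a weak endpoint for screening; disease-specific mortality in the screened versus unscreened population is the stronger measure.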
A case-control study finds that mothers of children with autism report higher pesticide exposure than mothers of healthy children. Which two biases could explain this finding, and how would you distinguish between them?
Researchers notice that participants who drop out of a weight-loss trial had higher baseline BMIs than completers. What type of bias does this represent, and how might it affect the study's conclusions?
Compare and contrast confounding bias and information bias: at what stage of research does each occur, and which can be corrected during statistical analysis?
A new cancer screening test shows 5-year survival rates of 85% compared to 60% for unscreened patients. A biostatistician argues this doesn't prove the screening saves lives. What bias is she concerned about, and what alternative measure would provide stronger evidence?
A meta-analysis of antidepressant trials shows that published studies report larger effect sizes than unpublished FDA submissions. Which bias does this demonstrate, and what graphical tool could have detected it?