In biostatistics, you're not just learning a list of study designs—you're learning a toolkit for evaluating evidence. Every research question demands a specific approach, and understanding why researchers choose RCTs over cohort studies, or when case-control designs outperform cross-sectional ones, is fundamental to interpreting health data. Exams will test your ability to match research questions to appropriate designs, identify sources of bias, and evaluate the strength of causal claims.
The key concepts here are directionality (prospective vs. retrospective), researcher control (experimental vs. observational), and evidence hierarchy (which designs support causal inference). Don't just memorize definitions—know what each design can and cannot tell us, and be ready to explain why a researcher would choose one over another.
Experimental studies give researchers control over exposures, allowing them to isolate cause-and-effect relationships. By manipulating variables and randomizing participants, these designs minimize confounding and bias.
Compare: RCTs vs. Quasi-Experiments—both involve intervention, but RCTs use randomization while quasi-experiments do not. If an exam asks about causation with the strongest evidence, RCTs are your answer; quasi-experiments are the fallback when randomization isn't feasible.
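To see why randomization matters, here is a minimal sketch (all numbers invented) of a hypothetical trial where an unmeasured confounder, smoking, is distributed across 1,000 participants. Because assignment is a coin flip, the confounder ends up roughly balanced between arms without the researchers ever measuring it:

```python
import random

random.seed(42)

# Hypothetical population: 1,000 participants, ~40% with an unmeasured
# confounder (smoking). These proportions are made up for illustration.
participants = [{"smoker": random.random() < 0.4} for _ in range(1000)]

# Randomize each participant to treatment or control with a coin flip.
for p in participants:
    p["group"] = "treatment" if random.random() < 0.5 else "control"

# Check balance: by chance alone, the confounder splits roughly evenly,
# so it cannot systematically distort the treatment-control comparison.
for group in ("treatment", "control"):
    members = [p for p in participants if p["group"] == group]
    rate = sum(p["smoker"] for p in members) / len(members)
    print(f"{group}: n={len(members)}, smoker rate={rate:.2f}")
```

In a quasi-experiment the groups are formed by something other than chance (a policy change, self-selection, geography), so this kind of balance is not guaranteed and confounding remains a live threat.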
When researchers cannot or should not manipulate variables, observational designs allow them to study associations in natural settings. The tradeoff is reduced control over confounding variables.
Compare: Cohort vs. Case-Control—both are analytic observational designs, but cohort studies follow exposure → outcome (prospective logic) while case-control studies work backward from outcome → exposure. FRQs often ask which design is better for rare diseases (case-control) vs. rare exposures (cohort).
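The directional difference also determines which effect measure each design supports. A rough illustration with an invented 2×2 table: a cohort compares disease risk between exposure groups (relative risk), while a case-control study can only compare exposure odds between cases and controls (odds ratio).

```python
# Hypothetical 2x2 table (counts invented for illustration):
#                 disease   no disease
# exposed            a=30       b=970
# unexposed          c=10       d=990
a, b, c, d = 30, 970, 10, 990

# Cohort logic (forward): risk of disease in each exposure group.
risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
relative_risk = risk_exposed / risk_unexposed

# Case-control logic (backward): odds of exposure in cases vs. controls,
# which works out to the cross-product odds ratio.
odds_ratio = (a * d) / (b * c)

print(f"RR = {relative_risk:.2f}")  # 3.00
print(f"OR = {odds_ratio:.2f}")     # 3.06 -- close to RR because the disease is rare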
The direction of data collection fundamentally affects bias and data quality. Prospective designs collect data as events happen; retrospective designs rely on existing records or memory.
Compare: Prospective vs. Retrospective Cohort—both track exposure → outcome, but prospective cohorts collect data in real-time while retrospective cohorts use existing records. Prospective designs have better data quality; retrospective designs are faster and cheaper.
When individual studies provide conflicting or limited evidence, systematic approaches can synthesize findings across research. Meta-analysis uses statistical methods to pool results and increase precision.
Compare: Meta-Analysis vs. Systematic Review—both synthesize multiple studies, but systematic reviews summarize findings qualitatively while meta-analyses pool data quantitatively. Meta-analyses produce a single effect estimate; systematic reviews may not.
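As a rough sketch of what "pooling quantitatively" means, here is a minimal fixed-effect, inverse-variance meta-analysis with invented study results (log odds ratios and standard errors). Each study is weighted by the inverse of its variance, so more precise studies count for more:

```python
import math

# Hypothetical studies: (log odds ratio estimate, standard error).
# Values are invented purely to illustrate inverse-variance pooling.
studies = [(0.35, 0.20), (0.10, 0.15), (0.25, 0.30)]

# Fixed-effect pooling: weight each study by 1 / variance.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled log odds ratio.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled log OR = {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
print(f"pooled OR     = {math.exp(pooled):.2f}")
```

Note that the pooled standard error (about 0.11 here) is smaller than any individual study's, which is the sense in which meta-analysis increases precision. The pooled estimate is only as good as the studies feeding it, though: heterogeneity and publication bias can still distort it.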
| Concept | Best Examples |
|---|---|
| Strongest causal evidence | RCTs, Experimental Studies |
| Efficient for rare diseases | Case-Control Studies |
| Calculates incidence/relative risk | Cohort Studies (prospective) |
| Calculates prevalence | Cross-Sectional Studies |
| Minimizes recall bias | Prospective Studies |
| Fastest/cheapest approach | Retrospective Studies, Case-Control |
| Tracks change over time | Longitudinal Studies, Cohort Studies |
| Synthesizes multiple studies | Meta-Analyses |
A researcher wants to study risk factors for a rare childhood cancer. Which study design is most efficient, and why can't they use an RCT?
Compare cohort studies and case-control studies: which calculates relative risk directly, and which calculates odds ratios? When does the odds ratio approximate relative risk?
A cross-sectional study finds that people who exercise have lower rates of depression. Why can't the researchers conclude that exercise prevents depression?
What distinguishes a prospective cohort study from a retrospective cohort study? Which has better protection against recall bias, and why?
If three RCTs on the same drug show conflicting results, what study design could help resolve the discrepancy, and what limitation should you consider when interpreting its findings?