In causal inference, you're rarely working with perfect data—unmeasured confounding lurks behind nearly every observational study, threatening to invalidate your conclusions. Sensitivity analysis methods give you the tools to ask the critical question: How wrong could I be? Rather than pretending confounders don't exist, these techniques let you quantify exactly how strong an unmeasured variable would need to be to overturn your findings. You're being tested on your ability to defend causal claims under uncertainty, and that means knowing when your estimates are robust versus when they're hanging by a thread.
These methods connect directly to core causal inference principles: the ignorability assumption, selection bias, instrumental variable validity, and mediation pathways. Each sensitivity analysis approach addresses a specific vulnerability in your causal argument. Don't just memorize what each method does—understand which assumption it stress-tests and what kind of study design it applies to. When an exam question asks you to evaluate the credibility of a causal claim, your answer should include how you'd probe its weaknesses.
These methods answer a direct question: How strong would an unmeasured confounder need to be to explain away my results? They translate abstract concerns about bias into concrete, interpretable quantities.
Compare: E-value vs. Cornfield conditions—both assess confounder strength requirements, but E-values give a single interpretable number while Cornfield conditions provide a set of inequalities for logical argument. Use E-values for quick communication; use Cornfield logic when defending against specific proposed confounders.
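To make the comparison concrete, here's a minimal sketch of the E-value computation using the standard formula from VanderWeele and Ding (2017); the risk ratio and confidence limit below are hypothetical numbers for illustration.

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio: the minimum strength of association
    (on the risk-ratio scale) an unmeasured confounder would need with
    both treatment and outcome to fully explain away the estimate.
    Formula: RR + sqrt(RR * (RR - 1)), per VanderWeele & Ding (2017)."""
    if rr < 1:
        rr = 1 / rr  # protective effects: work with the reciprocal
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical point estimate and the CI limit closer to the null
rr_hat, ci_lower = 1.8, 1.3
print(f"E-value (estimate): {e_value(rr_hat):.2f}")   # 3.00
print(f"E-value (CI limit): {e_value(ci_lower):.2f}") # 1.92
```

Reporting both E-values is standard practice: the first says how strong a confounder must be to nullify the point estimate, the second how strong it must be to make the interval include the null.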
When you've used matching or stratification to control observed confounders, these methods ask: What if I missed something? They're designed specifically for studies where you've already done the work of balancing covariates.
Compare: Rosenbaum bounds vs. Tipping point analysis—Rosenbaum bounds work within the matched-pairs framework using the sensitivity parameter Γ (the maximum odds ratio of differential treatment assignment within a pair), while tipping point analysis is more general and visualizes the full space of threatening confounders. For FRQs on matched designs, lead with Rosenbaum; for general robustness arguments, tipping points are more intuitive.
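For binary outcomes in matched pairs, Rosenbaum's worst-case p-value has a simple closed form: under hidden bias of at most Γ, the treated unit in a discordant pair has the event with probability at most Γ/(1+Γ). A minimal sketch, with hypothetical pair counts; sweeping Γ also doubles as a tipping-point search for the Γ at which significance is lost.

```python
from scipy.stats import binom

def rosenbaum_upper_p(t_pos: int, n_discordant: int, gamma: float) -> float:
    """Worst-case one-sided p-value for McNemar's test on matched pairs,
    allowing hidden bias up to odds ratio gamma.
    t_pos: discordant pairs in which the treated unit had the event."""
    p_max = gamma / (1 + gamma)          # worst-case per-pair probability
    return binom.sf(t_pos - 1, n_discordant, p_max)  # P(T >= t_pos)

# Hypothetical study: 60 discordant pairs, treated unit had the event in 40
t_pos, n_disc = 40, 60
for gamma in (1.0, 1.5, 2.0, 2.5):
    print(f"Gamma={gamma}: worst-case p = {rosenbaum_upper_p(t_pos, n_disc, gamma):.4f}")
```

At Γ = 1 (no hidden bias) this reduces to the exact McNemar test; the parameter to report is the largest Γ at which the worst-case p-value stays below your significance threshold.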
Different causal identification strategies have different vulnerabilities. These methods stress-test the specific assumptions that make each design work.
Compare: IV sensitivity vs. RD sensitivity—IV analysis worries about the exclusion restriction (does the instrument have sneaky direct effects?), while RD analysis worries about continuity and manipulation (is the cutoff truly as-if random?). Both ask "what if my identifying assumption is slightly wrong?" but target completely different assumptions.
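A quick illustration of the IV side, in the spirit of Conley et al.'s "plausibly exogenous" approach: in a linear model, a direct instrument effect δ on the outcome biases the Wald/2SLS estimate by δ divided by the first-stage effect, so you can sweep δ over plausible values and watch the corrected estimate. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical estimates from a linear IV model
beta_iv = 0.50      # Wald/2SLS estimate of the treatment effect
first_stage = 0.40  # effect of the instrument on treatment uptake

# If the instrument has a small direct effect delta on the outcome
# (an exclusion-restriction violation), the IV estimand absorbs
# delta / first_stage, so the corrected effect is:
for delta in np.linspace(0.0, 0.10, 6):
    beta_corrected = beta_iv - delta / first_stage
    print(f"direct effect delta={delta:.2f} -> corrected beta = {beta_corrected:.3f}")
```

Note how a direct effect only one-fifth the size of the estimate (δ = 0.10) cuts the corrected effect in half, because it is amplified by the weak first stage. RD sensitivity follows the same spirit but varies the bandwidth and checks density continuity at the cutoff instead.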
Sometimes you face multiple potential biases simultaneously—selection, measurement error, unmeasured confounding. These methods handle the messy reality of real-world data.
Compare: Multiple bias modeling vs. Probabilistic bias analysis—multiple bias modeling handles several bias types but typically uses fixed parameter values, while probabilistic bias analysis embraces uncertainty by using distributions. Combine them for the most honest assessment: model multiple biases, each with its own probability distribution.
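Here's a minimal probabilistic bias analysis sketch: instead of fixing the confounder's strength and prevalence, sample them from prior distributions and propagate to a distribution of adjusted estimates. The adjustment is the classical external-adjustment formula for a single binary confounder; the observed risk ratio and all prior distributions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000
rr_obs = 1.8  # hypothetical observed risk ratio

# Prior distributions over bias parameters (assumptions, not data):
rr_ud = rng.lognormal(mean=np.log(1.5), sigma=0.2, size=n_sims)  # confounder-outcome RR
p1 = rng.beta(4, 6, size=n_sims)  # confounder prevalence among exposed
p0 = rng.beta(2, 8, size=n_sims)  # confounder prevalence among unexposed

# Classical external-adjustment formula for a binary confounder:
# RR_adjusted = RR_observed / bias_factor
bias_factor = (p1 * (rr_ud - 1) + 1) / (p0 * (rr_ud - 1) + 1)
rr_adj = rr_obs / bias_factor

lo, med, hi = np.percentile(rr_adj, [2.5, 50, 97.5])
print(f"Adjusted RR: median {med:.2f}, 95% simulation interval ({lo:.2f}, {hi:.2f})")
```

The output is a distribution of bias-adjusted estimates rather than a single number; a fuller analysis would also fold in sampling error and additional bias terms (selection, misclassification) in the same Monte Carlo loop.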
| Concept | Best Examples |
|---|---|
| Confounder strength quantification | E-value, Cornfield conditions, Bounding factor |
| Matched design sensitivity | Rosenbaum bounds, Tipping point analysis |
| Instrumental variable robustness | IV sensitivity analysis |
| Regression discontinuity validity | RD sensitivity analysis (bandwidth, manipulation) |
| Mediation pathway robustness | Mediation sensitivity analysis |
| Multiple simultaneous biases | Multiple bias modeling, Probabilistic bias analysis |
| Communicating robustness to non-experts | E-value, Tipping point analysis |
| Incorporating prior knowledge | Probabilistic bias analysis |
You've conducted a matched observational study and found a significant treatment effect. Which sensitivity analysis method would you use to determine how much hidden bias could exist before your result becomes non-significant, and what parameter would you report?
Compare the E-value method and Rosenbaum bounds: What type of study is each best suited for, and what does each method's output tell you about unmeasured confounding?
A researcher using instrumental variables is worried that their instrument might have a small direct effect on the outcome. Which sensitivity analysis approach addresses this concern, and what assumption is being tested?
How does probabilistic bias analysis differ from traditional sensitivity analysis that uses fixed bias parameters? When would you prefer one approach over the other?
FRQ-style: You're reviewing a mediation analysis claiming that a job training program improves earnings primarily through increased self-efficacy. What specific unmeasured confounding threat does mediation sensitivity analysis address, and why can't randomization of treatment alone solve this problem?