When you're tested on AI ethics, you're not just being asked to define bias—you're being asked to demonstrate that you understand where bias enters AI systems, why it persists, and how different bias types compound to create unfair outcomes. Exam questions frequently require you to trace a discriminatory AI decision back to its root cause, whether that's flawed training data, problematic algorithm design, or human overreliance on automated outputs. Understanding the mechanism behind each bias type is what separates surface-level memorization from genuine comprehension.
These bias types don't exist in isolation. A single AI system can exhibit historical bias embedded in its training data, algorithmic bias in how it weights features, and automation bias in how humans interpret its outputs. The most challenging exam questions—especially FRQs—will ask you to identify multiple bias types operating simultaneously and explain their interaction. Don't just memorize definitions; know what stage of the AI pipeline each bias affects and what real-world harms it produces.
The foundation of any AI system is the data it learns from. When that data reflects historical inequalities, excludes certain populations, or captures the world inaccurately, the resulting model inherits those flaws as features, not bugs.
Compare: Historical bias vs. Representation bias—both involve problematic training data, but historical bias stems from when data was collected (reflecting past discrimination), while representation bias concerns who is included or excluded. An FRQ might ask you to identify which bias type explains why an AI performs differently across demographic groups.
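Because the facial-recognition example turns on who appears in the training set, a quick simulation can make the mechanism concrete. The sketch below is entirely hypothetical: it trains a simple classifier on synthetic data in which one group supplies 95% of the examples, then checks accuracy per group. The group names, sample sizes, and feature logic are all invented for illustration.

```python
# A minimal, hypothetical sketch of representation bias. Everything here
# (group names, sample sizes, feature logic) is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's features relate to the label in a slightly different way,
    # so a model fit mostly to one group generalizes poorly to the other.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Representation bias: group A supplies 95% of the training data.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Accuracy on fresh, balanced test samples differs sharply by group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

Nothing in the model's code mentions group membership; the disparity comes entirely from which group dominated the training data, which is exactly the distinction the FRQ is probing.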
Even when data collectors act with good intentions, the way data is gathered can introduce systematic errors. These biases emerge before the algorithm ever sees the data.
Compare: Sampling bias vs. Selection bias—sampling bias results from how a sample is drawn (methodology), while selection bias results from who gets systematically excluded (often due to structural factors). Both produce non-representative data, but the causes and solutions differ.
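To see how a sampling method alone, with no deliberate exclusion, can skew a dataset, consider the toy simulation below. The survey scenario, reachability probabilities, and income distribution are all invented for illustration.

```python
# A hypothetical sketch of sampling bias: a convenience-sampling method
# (a daytime phone survey) systematically over-samples one part of a
# synthetic population. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
population = rng.lognormal(mean=10.5, sigma=0.6, size=100_000)  # incomes

# In this toy world, people with below-median incomes are likelier to be
# reachable at home during the day, so the method itself skews the sample.
reachable = rng.random(population.size) < np.where(
    population < np.median(population), 0.6, 0.2
)
sample = population[reachable]

print(f"population mean income: {population.mean():,.0f}")
print(f"biased sample mean:     {sample.mean():,.0f}")
```

If the gap instead came from a structural barrier, such as an entire region having no phone coverage at all, that would be selection bias, and the fix would target access rather than survey methodology.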
Even with perfect data, the choices made in building and deploying algorithms can introduce or amplify unfairness. The algorithm itself is a site of ethical decision-making.
Compare: Algorithmic bias vs. Data bias—algorithmic bias originates in the model's design and logic, while data bias originates in the training information. A biased algorithm can produce unfair outcomes even with representative data, and vice versa. If an FRQ asks about interventions, specify whether you're addressing the algorithm, the data, or both.
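Here is a hypothetical sketch of that distinction. Both groups are equally represented in the synthetic data, but qualified applicants in group B receive systematically lower model scores, and a single global approval threshold, which is a design decision, converts that score gap into unequal error rates. Every number below is invented for illustration.

```python
# Hypothetical sketch: an algorithmic design choice (one global threshold)
# produces unequal false-negative rates even with equal group representation.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
qualified = rng.random(n) < 0.5
group_b = rng.random(n) < 0.5

# Synthetic scores: qualified ~0.7, unqualified ~0.4, minus a 0.1 penalty
# that this invented model's feature weighting applies to group B.
scores = rng.normal(np.where(qualified, 0.7, 0.4), 0.1)
scores -= np.where(group_b, 0.1, 0.0)

approved = scores > 0.55  # one threshold for everyone: the design choice

for name, mask in [("group A", ~group_b), ("group B", group_b)]:
    fnr = np.mean(~approved[mask & qualified])
    print(f"{name} false-negative rate among qualified: {fnr:.2f}")
```

The point for an FRQ: collecting more representative data would not close this gap, because the unfairness lives in the scoring and the decision rule, so the intervention must target the algorithm.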
These biases emerge not from the AI system itself, but from how humans build, interpret, and rely on automated systems. They highlight the irreducibly human dimensions of AI ethics.
Compare: Confirmation bias vs. Automation bias—confirmation bias affects how humans build and interpret AI systems, while automation bias affects how humans defer to AI outputs. Both involve cognitive shortcuts, but confirmation bias operates throughout development while automation bias operates at the point of deployment and use.
| Concept | Relevant bias types |
|---|---|
| Training data problems | Historical bias, Representation bias, Data bias |
| Data collection flaws | Sampling bias, Selection bias, Measurement bias |
| Algorithm design issues | Algorithmic bias, Reporting bias |
| Human-AI interaction | Confirmation bias, Automation bias |
| Perpetuates past discrimination | Historical bias, Data bias |
| Excludes or underrepresents groups | Representation bias, Sampling bias, Selection bias |
| Requires human oversight solutions | Automation bias, Confirmation bias |
| Affects high-stakes decisions | Algorithmic bias, Automation bias, Historical bias |
1. A facial recognition system performs well on light-skinned faces but poorly on dark-skinned faces because the training dataset contained mostly light-skinned individuals. Which two bias types best explain this outcome, and how do they differ?
2. An AI hiring tool was trained on a company's historical hiring decisions, which favored male candidates. A recruiter notices the tool ranks women lower but approves its recommendations anyway. Identify the bias types present at each stage of this scenario.
3. Compare and contrast sampling bias and selection bias. How might each produce a non-representative training dataset, and what different interventions would address each?
4. A healthcare AI trained on data from urban hospitals makes inaccurate predictions for rural patients. Is this primarily a measurement bias, representation bias, or algorithmic bias problem? Defend your answer.
5. If an FRQ asks you to explain how a single AI system can exhibit multiple bias types simultaneously, which three bias types would you choose to demonstrate the interaction between data, algorithm, and human factors?