In AP Research, your ability to design a credible study—and critique the studies of others—hinges on understanding what can go wrong. Validity threats are the hidden landmines that can undermine even the most carefully planned research. When you evaluate sources for your literature review or defend your own methodology, you're being tested on whether you can identify internal validity issues (did the intervention actually cause the effect?) and external validity issues (can these findings apply beyond this specific study?). These concepts appear throughout the AP Research framework, from source evaluation to your own methodological transparency.
Here's the key insight: validity threats aren't just a checklist to memorize. They represent fundamental principles about why research can mislead us—whether through flawed participant selection, uncontrolled variables, or measurement inconsistencies. When you encounter these threats in your own work or in published studies, you need to recognize the underlying mechanism and explain how it compromises conclusions. Don't just memorize the terms—know what type of validity each threat attacks and how researchers attempt to control for it.
Participant-related threats stem from who participates in your study and how they change over time. The core principle: your participants must accurately represent the population you're studying, and any changes in them must be attributable to your intervention rather than external factors.
Compare: Selection Bias vs. Attrition—both create non-representative samples, but selection bias occurs before data collection while attrition occurs during the study. If an FRQ asks about threats to generalizability, consider whether the problem started at recruitment or emerged later.
Time-related threats arise from events or changes that occur during your study period. The core principle: anything happening alongside your intervention could be the real cause of observed effects.
Compare: History vs. Maturation—both involve changes over time, but history refers to external events while maturation refers to internal developmental changes in participants. When critiquing longitudinal research, ask: is this change coming from outside or inside the participant?
Measurement threats emerge from how you collect and measure data. The core principle: your measurement tools must remain consistent and accurate throughout the study, or variations in your data may reflect instrument problems rather than real effects.
Compare: Instrumentation vs. Testing Effects—both involve measurement issues, but instrumentation is about changes in your tools while testing effects are about changes in participants due to measurement exposure. One is a researcher problem; the other is a participant response.
Expectation-related threats stem from the human element in research: how expectations and social dynamics distort authentic responses. The core principle: both researchers and participants can, consciously or not, behave in ways that skew results toward expected outcomes.
Compare: Experimenter Bias vs. Demand Characteristics—both involve expectations distorting results, but experimenter bias comes from the researcher's behavior while demand characteristics come from participants' behavior. Double-blind designs address the former; careful study design addresses the latter.
Causal-inference threats address the fundamental challenge of isolating cause and effect. The core principle: if other variables could explain your results, you cannot claim your independent variable caused the observed changes.
Compare: Confounding Variables vs. History—both introduce alternative explanations, but confounds are variables that systematically vary with your IV while history refers to discrete external events. Confounds are ongoing; historical threats are time-bound occurrences.
| Validity Concept | Key Threats |
|---|---|
| Internal validity (causation) | Confounding variables, History, Maturation, Selection bias |
| External validity (generalizability) | Selection bias, Attrition, Demand characteristics |
| Measurement reliability | Instrumentation, Testing effects, Regression to the mean |
| Researcher objectivity | Experimenter bias, Instrumentation (observer drift) |
| Participant authenticity | Demand characteristics, Testing effects, Attrition |
| Longitudinal design risks | Maturation, History, Attrition, Instrumentation |
| Pre-test/post-test design risks | Testing effects, Regression to the mean, Maturation |
| Statistical interpretation | Regression to the mean, Confounding variables |
A researcher selects students who scored in the bottom 10% on a math assessment for an intervention, then celebrates when their post-test scores improve. Which two validity threats most likely explain this "improvement" without any real intervention effect?
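A quick way to see why this scenario produces an "improvement" with no real intervention is to simulate it. The sketch below (all numbers are illustrative assumptions, not data from any real study) gives each student a stable ability plus random test-day noise, selects the bottom 10% on a pre-test, and retests them with no intervention at all; their mean score rises anyway, purely from regression to the mean.

```python
import random

random.seed(0)

N = 10_000
# Each student has a stable "true ability"; each test adds independent noise.
abilities = [random.gauss(70, 10) for _ in range(N)]
pre = [a + random.gauss(0, 8) for a in abilities]
post = [a + random.gauss(0, 8) for a in abilities]  # no intervention at all

# Select the bottom 10% on the pre-test, as in the scenario.
cutoff = sorted(pre)[N // 10]
selected = [i for i in range(N) if pre[i] < cutoff]

mean_pre = sum(pre[i] for i in selected) / len(selected)
mean_post = sum(post[i] for i in selected) / len(selected)
print(f"pre-test mean:  {mean_pre:.1f}")
print(f"post-test mean: {mean_post:.1f}")  # higher, with no intervention
```

The mechanism: students in the bottom 10% are there partly because of unusually bad luck (negative noise) on the pre-test, and that luck does not repeat on the post-test, so their scores drift back toward their true abilities.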
In a six-month study on a new teaching method, students in the treatment group show significant gains while control group students show modest gains. A major education policy change was announced midway through the study. How would you distinguish between history, maturation, and the actual intervention effect?
Compare and contrast experimenter bias and demand characteristics: What do they have in common, and what strategies address each one differently?
You're evaluating a published study for your literature review. The researchers used a convenience sample of college students and lost 40% of participants before the study concluded, with most dropouts being students who reported struggling with the material. Identify the validity threats and explain how they limit the study's conclusions.
An FRQ asks you to design a study that minimizes threats to internal validity. Which three validity threats would you prioritize addressing, and what specific methodological choices would you make to control for each?