Decision-making sits at the heart of cognitive science because it reveals how the mind transforms information into action. When you study these concepts, you're being tested on your understanding of cognitive architecture, bounded processing, heuristic reasoning, and the interplay between emotion and rationality. Every model and bias in this guide connects back to fundamental questions about how humans represent problems, weigh alternatives, and commit to choices under uncertainty.
Don't just memorize definitions. Know what each concept demonstrates about the mind's capabilities and limitations. An exam question might ask you to compare rational models with heuristic approaches, or explain why prospect theory challenges classical economic assumptions. The goal is to understand the mechanisms behind decision-making, not just label them. When you can explain why anchoring distorts judgment or how satisficing reflects cognitive constraints, you've mastered the material.
These foundational frameworks establish the theoretical ideals against which real human decision-making is measured. Classical approaches assume logical processing and optimal outcomes, while bounded models acknowledge the mind's computational constraints.
This is the normative ideal for how decisions should be made. It assumes decision-makers have complete information, unlimited processing capacity, and clearly ordered preferences.
The model follows a step-by-step structure: define the problem, identify the decision criteria, generate all alternatives, evaluate each alternative against the criteria, select the optimal option, and implement and review the outcome.
Nobody actually decides this way in practice. The model's real purpose is as a benchmark: cognitive scientists compare actual human performance against it to reveal systematic departures from rationality.
Herbert Simon introduced this concept to capture what the rational model misses. Real decision-makers face cognitive limitations, time pressure, and incomplete information. They can't evaluate every option or compute optimal solutions.
Because of these bounds, people develop adaptive strategies fitted to their decision environment. Rather than applying a single universal algorithm, the mind adjusts its approach based on context: how much time is available, how much is at stake, and what information is accessible.
The term combines "satisfy" and "suffice." Instead of comparing all alternatives to find the best one, a satisficer picks the first option that clears a minimum acceptability threshold.
This strategy is adaptive under constraints. When time, cognitive resources, or information access is limited, exhaustive search becomes impractical or even counterproductive. Satisficing challenges optimization assumptions by showing that "good enough" outcomes often serve decision-makers better than costly searches for the absolute best.
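The contrast between optimizing and satisficing can be made concrete in code. This is a minimal sketch, not a formal model; the apartment names and 0-100 desirability scores are invented for illustration:

```python
def optimize(options, score):
    """Exhaustive search: evaluate every option, return the best."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Simon-style satisficing: take the first option that clears the bar."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # no option was acceptable

# Hypothetical example: choosing an apartment by a 0-100 desirability score.
apartments = {"A": 62, "B": 78, "C": 91, "D": 70}
score = apartments.get

best = optimize(apartments, score)                        # "C": scores all 4 options
good_enough = satisfice(apartments, score, threshold=75)  # "B": stops after 2 checks
```

The optimizer must score every option before it can answer; the satisficer stops as soon as one option clears the threshold, which is exactly the saving that makes the strategy adaptive when evaluation is costly.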
Compare: Rational Decision-Making vs. Bounded Rationality: both aim for good outcomes, but rational models assume unlimited processing while bounded rationality acknowledges cognitive constraints. If a question asks about "real-world departures from optimal choice," bounded rationality is your framework.
Heuristics are fast, efficient mental rules that reduce complex problems to manageable judgments. They work well in many contexts but create predictable errors when applied inappropriately.
Heuristics are mental shortcuts that trade accuracy for speed, enabling rapid decisions without exhaustive analysis. The key insight from Kahneman and Tversky's research program is that the errors heuristics produce aren't random. They're systematic and predictable, which tells us something important about the architecture of cognitive processing itself.
When a heuristic gets applied to a problem outside its effective range, the result is a consistent, directional bias rather than random noise.
The first piece of information you encounter on a topic disproportionately influences your subsequent judgment. That initial value acts as a reference point, and any adjustments you make from it tend to be insufficient.
What makes anchoring striking is that even completely arbitrary anchors (like spinning a wheel to generate a random number) can bias estimates of unrelated quantities. The effect is also robust across expertise levels: professionals in negotiation, medicine, and law show anchoring effects despite their domain knowledge.
Once you hold a belief, you tend to preferentially seek, interpret, and recall information that supports it. There's an asymmetry in how you evaluate evidence: disconfirming information gets scrutinized much more critically than confirming information.
This bias affects scientific reasoning and everyday judgment alike. It's one of the main reasons exposure to diverse perspectives matters for accurate belief updating.
Compare: Anchoring Effect vs. Confirmation Bias: anchoring distorts judgment through initial information exposure, while confirmation bias distorts it through selective information processing. Both show how the sequence and selection of information shape conclusions.
How people evaluate uncertain outcomes reveals systematic departures from expected utility theory. The subjective experience of gains and losses, not objective values, drives risky choice.
Kahneman and Tversky developed prospect theory as an alternative to expected utility theory. The central shift: people evaluate outcomes as changes from a reference point rather than as final states of wealth.
Two features make this theory powerful:

- Loss aversion: losses loom larger than equivalent gains (roughly twice as large in common estimates), so people take risks to avoid losses that they would never take to secure gains.
- Diminishing sensitivity: the value function is concave for gains and convex for losses, so the difference between $0 and $100 feels larger than the difference between $1,000 and $1,100.
This is why prospect theory challenges classical economics. Preferences aren't stable properties of the decision-maker; they're constructed in the moment based on how the problem is presented.
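The prospect-theory value function can be sketched directly. The exponent and loss-aversion coefficient below are Tversky and Kahneman's 1992 median estimates; treat the specific numbers as illustrative parameters rather than fixed constants:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function over gains and losses.

    Outcomes x are coded relative to a reference point, not as final
    wealth. alpha < 1 gives diminishing sensitivity; lam > 1 gives
    loss aversion.
    """
    if x >= 0:
        return x ** alpha            # concave for gains
    return -lam * (-x) ** alpha      # convex and steeper for losses

# A $100 loss hurts more than a $100 gain pleases:
print(value(100))                    # ~57.5
print(value(-100))                   # ~-129.5
print(value(100) + value(-100) < 0)  # True: a 50/50 bet on +/-$100 feels bad
```

The asymmetry in the last line is why most people reject a fair coin flip for $100: the subjective value of the possible loss outweighs the subjective value of the equal possible gain.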
Cognitive dissonance is the psychological tension that arises when your beliefs, attitudes, or behaviors conflict with one another. Festinger's classic theory explains that this discomfort motivates change: you'll adjust your beliefs or rationalize your behavior to restore internal consistency.
A common example is post-decision rationalization. After committing to a choice, people tend to inflate the positives of their chosen option and downplay the positives of rejected alternatives, rather than objectively reassessing.
Compare: Prospect Theory vs. Cognitive Dissonance: prospect theory explains how framing affects choice before decisions, while cognitive dissonance explains attitude change after decisions. Both reveal that preferences aren't fixed but context-dependent.
Not all decisions involve deliberate analysis. Intuitive processing operates automatically and rapidly, drawing on pattern recognition and accumulated experience.
Intuitive decisions are fast, automatic, and experience-based. They rely on pattern recognition from extensive domain exposure rather than explicit step-by-step reasoning.
In the dual-process framework, intuition corresponds to System 1 processing: quick and effortless, but prone to systematic biases. System 2, by contrast, is slow, deliberate, and analytical.
A critical nuance: expert intuition can be highly accurate, but only in high-validity environments where patterns are stable and feedback is clear (think chess or firefighting). In unpredictable domains with noisy feedback (like long-term political forecasting), intuition tends to be unreliable.
Emotions aren't just noise that interferes with good decisions. They can serve as an information source, providing rapid assessments of situations that guide adaptive choices.
Damasio's somatic marker hypothesis proposes that bodily states associated with past outcomes influence current decisions before conscious deliberation kicks in. You might get a "gut feeling" about a bad option because your body has learned to associate similar situations with negative outcomes.
There's also an interpersonal dimension: reading others' emotional states helps you anticipate responses and navigate social decisions more effectively.
Compare: Intuitive Decision-Making vs. Rational Decision-Making: intuition excels when time is limited and patterns are recognizable, while rational analysis excels when stakes are high and systematic comparison is feasible. Knowing when to use each is itself a metacognitive skill.
Decisions rarely occur in isolation. Group dynamics, moral considerations, and stakeholder impacts add layers of complexity beyond individual cognition.
Groups can improve decision quality by pooling information and surfacing objections that individuals might miss. But groups also introduce risks.
Groupthink occurs when cohesion and conformity pressure suppress dissent, leading to premature consensus. The group converges on a decision not because it's the best option, but because nobody wants to rock the boat.
Whether a group outperforms its individual members depends on facilitation quality, communication norms, and whether the group genuinely aggregates diverse judgments or simply defers to the loudest voice.
Moral reasoning draws on several frameworks:

- Consequentialism: judge a choice by its outcomes for everyone affected.
- Deontology: judge it by whether it follows moral duties and rules, regardless of outcome.
- Virtue ethics: judge it by the character traits it expresses and cultivates.
Stakeholder analysis requires identifying everyone affected by a decision and weighing their competing interests. The related concept of bounded ethicality suggests that cognitive limitations and self-serving biases affect moral judgment just as they affect other types of decisions: people don't always act unethically on purpose; sometimes their cognitive constraints get in the way.
Compare: Group Decision-Making vs. Individual Decision-Making: groups access more information but face coordination costs and conformity pressures. Understanding when groups outperform individuals (and vice versa) is a key exam topic.
These structured approaches formalize decision processes, making complex choices more tractable and transparent. They translate cognitive tasks into explicit procedures.
Decision trees visually map sequential choices. Branches represent decision points, chance events, and outcomes with associated probabilities.
At each node, you can calculate expected value as EV = Σ (pᵢ × vᵢ), where pᵢ is the probability of each outcome and vᵢ is its value. This lets you compare different paths through the tree.
The real power of decision trees is decomposition: they break a complex decision into smaller, more manageable components that you can analyze one step at a time.
SWOT is a four-quadrant framework that maps internal factors (Strengths, Weaknesses) against external factors (Opportunities, Threats). It's a strategic alignment tool that connects what an organization can do to what the environment demands.
SWOT is qualitative rather than quantitative, making it useful for structuring initial problem representation before moving to more precise analytical methods.
Cost-benefit analysis systematically compares expected costs and benefits to identify the option with the greatest net value. The decision rule is straightforward: choose the option where net value (total benefits minus total costs) is maximized.
The main challenge is monetization. Diverse outcomes need to be expressed in common units for comparison, and some values (human safety, environmental impact, quality of life) resist easy quantification.
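Once everything is monetized, the comparison itself is simple arithmetic. A minimal sketch, with invented option names and dollar figures:

```python
def net_benefit(benefits, costs):
    """Net value of one option: total benefits minus total costs."""
    return sum(benefits) - sum(costs)

# Hypothetical options, all outcomes already expressed in dollars.
options = {
    "build": net_benefit(benefits=[120_000, 30_000], costs=[90_000]),
    "buy":   net_benefit(benefits=[110_000], costs=[60_000, 10_000]),
}
best = max(options, key=options.get)
print(best, options[best])  # prints "build 60000"
```

The hard part, as the text notes, happens before this code runs: getting safety, environmental, and quality-of-life outcomes into the same dollar units as everything else.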
Compare: Decision Trees vs. Cost-Benefit Analysis: decision trees handle sequential uncertainty and branching outcomes, while cost-benefit analysis compares discrete alternatives. Decision trees are better when timing and contingencies matter; cost-benefit analysis works for straightforward comparisons.
| Concept | Best Examples |
|---|---|
| Cognitive Constraints | Bounded Rationality, Satisficing, Heuristics |
| Systematic Biases | Anchoring Effect, Confirmation Bias, Cognitive Dissonance |
| Risk and Framing | Prospect Theory, Loss Aversion, Reference Dependence |
| Dual-Process Thinking | Intuitive Decision-Making, Emotional Intelligence |
| Social Factors | Group Decision-Making, Groupthink, Ethical Decision-Making |
| Analytical Methods | Decision Trees, SWOT Analysis, Cost-Benefit Analysis |
| Normative Benchmarks | Rational Decision-Making Model, Expected Utility |
How does bounded rationality explain why satisficing is adaptive rather than irrational? What cognitive constraints make optimization impractical?
Compare the anchoring effect and confirmation bias: both distort judgment, but at what stage of the decision process does each operate?
Why does prospect theory predict different choices when identical outcomes are framed as gains versus losses? What role does the reference point play?
In what types of environments is intuitive decision-making likely to be accurate, and when should decision-makers distrust their intuitions?
Practice prompt: A committee must choose between two policy options under time pressure. Using concepts from this guide, explain two cognitive biases that might affect the group's decision and one analytical tool that could improve the process.