Artificial intelligence isn't just a technology topic—it's central to cognitive science because AI systems serve as computational models of the mind. Every milestone you'll study represents a hypothesis about how thinking works: Can reasoning be reduced to symbol manipulation? Does understanding require embodiment? When a machine solves problems, is it "thinking" in any meaningful sense? You're being tested on your ability to connect these engineering achievements to deeper questions about cognition, representation, and intelligence.
These milestones also trace the evolution of competing theories in cognitive science—from the symbolic AI paradigm (rules and logic) to connectionist approaches (neural networks and learning). Understanding why certain approaches succeeded or failed tells us something profound about the architecture of human cognition. Don't just memorize dates and names—know what each milestone reveals about the nature of thought, and be ready to argue whether machines can truly "understand" or merely simulate understanding.
The earliest AI systems operated on the assumption that intelligence is symbol manipulation—that thinking means applying logical rules to abstract representations. This symbolic AI approach dominated the field for decades and directly tested the hypothesis that human cognition works like a formal reasoning system.
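To make the symbolic hypothesis concrete, here is a minimal forward-chaining sketch in Python. The facts and rules are invented for illustration; this shows the general technique, not the actual code of the Logic Theorist or any historical system:

```python
# Minimal forward-chaining sketch of the symbolic hypothesis: knowledge
# is a set of discrete symbols, and "thinking" is nothing but the
# mechanical application of if-then rules until nothing new follows.
facts = {"socrates_is_human"}

# Each rule: if every premise symbol is present, derive the conclusion.
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:  # apply rules repeatedly until a fixpoint is reached
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # a new symbol is derived
            changed = True

print(sorted(facts))  # all three facts, the last two derived by rule
```

Run to a fixpoint, the program "deduces" mortality without any grasp of what the symbols mean, which is precisely the commitment (and the vulnerability) of the symbolic paradigm.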
Compare: Logic Theorist vs. General Problem Solver—both used symbolic reasoning and heuristics, but Logic Theorist was domain-specific (math proofs) while GPS attempted domain-general intelligence. This distinction matters for FRQs asking about the generality of cognitive architectures.
Natural language systems tackled one of AI's hardest problems: human language. They raised fundamental questions about whether processing language is the same as understanding it, a debate that remains central to cognitive science.
Compare: ELIZA vs. Watson—both processed natural language, but ELIZA used simple rules while Watson used statistical learning over massive datasets. Neither demonstrates understanding in the philosophical sense, but Watson's performance is far more sophisticated. If asked about the Chinese Room argument, both are relevant examples.
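To see how little machinery the ELIZA effect requires, here is a toy sketch in the spirit of ELIZA's scripted pattern matching. The patterns and responses are invented for illustration, not Weizenbaum's actual DOCTOR script:

```python
import re

# Toy ELIZA-style responder: a handful of regex templates and no model
# of meaning whatsoever. Invented patterns, not Weizenbaum's script.
patterns = [
    (r"I feel (.*)", "Why do you feel {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    for pattern, template in patterns:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default deflection when nothing matches

print(respond("I feel anxious about the exam"))
# -> Why do you feel anxious about the exam?
```

The function manipulates strings it has no model of, yet users readily read empathy into the output. That gap between processing and understanding is exactly what the Chinese Room argument targets.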
Rather than pursuing general intelligence, expert systems encoded human expertise into specialized programs. They reflected a shift toward narrow AI and tested whether intelligence could be captured as domain-specific rules.
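A toy sketch of the expert-system recipe, assuming a made-up diagnostic domain with invented certainty factors (real systems such as MYCIN used far richer rule languages):

```python
# Expert-system-flavored sketch: expertise as domain-specific if-then
# rules with certainty factors. The medical "rules" here are invented
# toys; real systems like MYCIN used far richer rule languages.
RULES = [
    ({"fever", "stiff_neck"}, ("possible_meningitis", 0.7)),
    ({"fever", "cough"}, ("possible_flu", 0.6)),
]

def diagnose(symptoms):
    # Fire every rule whose conditions all hold; report conclusions
    # with their (made-up) certainty factors, strongest first.
    hits = [concl for conds, concl in RULES if conds <= symptoms]
    return sorted(hits, key=lambda c: c[1], reverse=True)

print(diagnose({"fever", "cough", "headache"}))
# -> [('possible_flu', 0.6)]
```

Note what the sketch cannot do: faced with symptoms outside its rule base, it has nothing to say. That brittleness is the limitation that ultimately constrained expert systems.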
The connectionist paradigm proposed that intelligence emerges from distributed processing across networks of simple units—inspired by the brain's architecture. This approach challenged symbolic AI's assumptions about discrete representations.
Compare: Expert Systems vs. Deep Learning—expert systems required humans to explicitly encode knowledge, while deep learning extracts patterns automatically from data. This reflects a fundamental debate in cognitive science: is knowledge represented explicitly or distributed implicitly across neural connections?
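A minimal sketch of the learning side of that debate: a single sigmoid unit trained by gradient descent picks up the logical-AND pattern from examples, so the resulting "knowledge" lives in weight values rather than in any explicit rule (the learning rate and epoch count are arbitrary illustrative choices):

```python
import math

# A single sigmoid unit learns logical AND by gradient descent. The
# "knowledge" it acquires is three floating-point numbers, not a rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0
lr = 0.5  # learning rate; value chosen arbitrarily for illustration

for _ in range(5000):  # epochs; also an arbitrary illustrative choice
    for (x1, x2), target in data:
        y = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # unit output
        grad = (y - target) * y * (1 - y)  # d(squared error)/d(input sum)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

for (x1, x2), _ in data:
    y = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
    print((x1, x2), round(y, 2))  # close to 0, 0, 0, 1 after training
```

No one ever writes the AND rule down; it emerges in the weights. That is the connectionist picture of implicit, distributed knowledge in miniature.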
Games provide controlled environments for testing AI capabilities in strategic reasoning, planning, and learning. Each game-playing milestone represents a different approach to achieving superhuman performance.
Compare: Deep Blue vs. AlphaGo—Deep Blue used symbolic search and hand-coded evaluation; AlphaGo used neural networks and self-play learning. This contrast perfectly illustrates the shift from GOFAI (Good Old-Fashioned AI) to modern machine learning. Expect FRQs to ask you to analyze what each approach reveals about the nature of expertise and intuition.
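To ground the GOFAI side of that contrast, here is a minimal minimax sketch over a made-up toy game with a hand-written evaluation function. Deep Blue's approach was essentially this recipe scaled up with enormous search power and chess-specific heuristics:

```python
# Minimal minimax sketch: exhaustive game-tree search over a
# hand-written evaluation function, the GOFAI recipe that Deep Blue
# scaled up massively. The "game" and scores below are invented toys.

def minimax(state, depth, maximizing, moves, evaluate):
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)  # hand-coded judgment, nothing learned
    scores = [minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children]
    return max(scores) if maximizing else min(scores)

# Hypothetical toy game: a state is an integer, each move adds 1 or 2,
# and the evaluator simply prefers even positions for the maximizer.
moves = lambda s: [s + 1, s + 2] if s < 10 else []
evaluate = lambda s: 1 if s % 2 == 0 else -1

print(minimax(0, 4, True, moves, evaluate))
```

AlphaGo's break with this recipe was to replace the hand-coded evaluation with deep networks trained through self-play, using them to guide Monte Carlo tree search rather than exhaustive minimax. The table below maps each core concept to its best milestone examples.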
| Concept | Best Examples |
|---|---|
| Symbolic AI / Classical Paradigm | Logic Theorist, General Problem Solver, Expert Systems |
| Connectionism / Neural Networks | Backpropagation, ImageNet/Deep Learning, AlphaGo |
| Natural Language Processing | ELIZA, Watson |
| Behavioral Tests of Intelligence | Turing Test, ELIZA |
| Domain-Specific vs. General AI | Expert Systems (narrow), GPS (attempted general) |
| Learning vs. Programmed Knowledge | AlphaGo, Deep Learning (learning) vs. Deep Blue, Expert Systems (programmed) |
| Human-Computer Interaction | ELIZA, Watson, Turing Test |
| Strategic Reasoning | Deep Blue, AlphaGo |
1. Compare and contrast the Logic Theorist and AlphaGo. What do their different approaches reveal about competing theories of cognition in cognitive science?
2. Which two milestones best illustrate the ELIZA effect, and why does this phenomenon matter for debates about machine understanding?
3. If an FRQ asks you to evaluate the Chinese Room argument, which milestones would you use as examples of systems that process symbols without understanding? Explain your choices.
4. What distinguishes the symbolic AI paradigm from the connectionist paradigm? Identify one milestone from each approach and explain how they reflect different hypotheses about the architecture of mind.
5. Both Deep Blue and AlphaGo achieved superhuman performance in games. Why is AlphaGo considered a more significant milestone for understanding human-like intelligence? What does this suggest about the role of learning in cognition?