
Intro to Cognitive Science

Artificial Intelligence Milestones


Why This Matters

Artificial intelligence isn't just a technology topic—it's central to cognitive science because AI systems serve as computational models of the mind. Every milestone you'll study represents a hypothesis about how thinking works: Can reasoning be reduced to symbol manipulation? Does understanding require embodiment? When a machine solves problems, is it "thinking" in any meaningful sense? You're being tested on your ability to connect these engineering achievements to deeper questions about cognition, representation, and intelligence.

These milestones also trace the evolution of competing theories in cognitive science—from the symbolic AI paradigm (rules and logic) to connectionist approaches (neural networks and learning). Understanding why certain approaches succeeded or failed tells us something profound about the architecture of human cognition. Don't just memorize dates and names—know what each milestone reveals about the nature of thought, and be ready to argue whether machines can truly "understand" or merely simulate understanding.


Symbolic AI and the Classical Paradigm

The earliest AI systems operated on the assumption that intelligence is symbol manipulation—that thinking means applying logical rules to abstract representations. This symbolic AI approach dominated the field for decades and directly tested the hypothesis that human cognition works like a formal reasoning system.

Turing Test (1950)

  • Proposed the "imitation game" as a behavioral criterion for intelligence—if a machine's responses are indistinguishable from a human's, it exhibits intelligent behavior
  • Sidestepped the consciousness question by focusing on observable behavior rather than internal states, influencing behaviorist and functionalist approaches in cognitive science
  • Remains controversial because critics argue it tests linguistic mimicry, not genuine understanding (see: Searle's Chinese Room argument)

Logic Theorist (1956)

  • First program to prove mathematical theorems—developed by Newell, Shaw, and Simon, it proved 38 of the first 52 theorems in Principia Mathematica
  • Demonstrated heuristic search, using shortcuts rather than exhaustive calculation to navigate problem spaces, mimicking human reasoning strategies
  • Launched the symbolic AI paradigm at the Dartmouth Conference, establishing the field's foundational assumption that thinking is computation

General Problem Solver (1959)

  • Aimed to be domain-general—Newell and Simon designed it to solve any problem that could be represented as a goal state and operators
  • Introduced means-ends analysis, a heuristic strategy that reduces the difference between current and goal states, directly modeled on human problem-solving protocols
  • Revealed limitations of pure symbolism—struggled with problems requiring perceptual or embodied knowledge, foreshadowing later critiques
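
The means-ends strategy described above can be sketched as a toy recursive planner. Everything here—the set-of-facts state format, the "get to work" operators, the function names—is invented for illustration; it is not Newell and Simon's actual program.

```python
# Toy means-ends analysis in the spirit of GPS (illustrative only, not
# Newell and Simon's implementation). States are sets of facts; each
# operator needs some facts, removes some, and adds others.
def achieve(state, goals, operators, depth=10):
    """Return (new_state, plan) achieving every fact in `goals`, or None."""
    state, plan = set(state), []
    for fact in goals:
        if fact in state:
            continue                      # this part of the difference is done
        for op in operators:
            if fact not in op["adds"] or depth == 0:
                continue
            # Subgoal: first achieve the operator's own preconditions.
            sub = achieve(state, op["needs"], operators, depth - 1)
            if sub is None:
                continue
            state, subplan = sub
            state = (state - op["removes"]) | op["adds"]   # apply operator
            plan += subplan + [op["name"]]
            break
        else:
            return None                   # nothing reduces the difference
    return state, plan

# Hypothetical "get to work" domain.
ops = [
    {"name": "walk-to-car", "needs": {"at-home"}, "removes": {"at-home"}, "adds": {"at-car"}},
    {"name": "drive",       "needs": {"at-car"},  "removes": {"at-car"},  "adds": {"at-work"}},
]
final_state, plan = achieve({"at-home"}, ["at-work"], ops)
print(plan)  # the operator sequence that closes the current-goal gap
```

Note how the planner works backward from the missing fact ("at-work") to an operator that supplies it, then subgoals on that operator's preconditions—exactly the difference-reduction loop the bullets describe.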

Compare: Logic Theorist vs. General Problem Solver—both used symbolic reasoning and heuristics, but Logic Theorist was domain-specific (math proofs) while GPS attempted domain-general intelligence. This distinction matters for FRQs asking about the generality of cognitive architectures.


Natural Language and the Understanding Question

These systems tackled the hardest problem in AI: human language. They raised fundamental questions about whether processing language is the same as understanding it—a debate that remains central to cognitive science.

ELIZA (1966)

  • Simulated a Rogerian therapist using simple pattern matching and substitution rules—no actual comprehension of meaning
  • Triggered the "ELIZA effect"—users attributed understanding and empathy to the program despite its shallow processing, revealing human tendencies to anthropomorphize
  • Became a touchstone for skeptics who argue that passing behavioral tests doesn't indicate genuine understanding (syntax without semantics)
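
ELIZA's "shallow processing" is easy to see in code. The sketch below uses invented rules in the general style of Weizenbaum's script (not his original DOCTOR rules): match a pattern, reflect pronouns, and splice the captured text into a canned template—no representation of meaning anywhere.

```python
import re

# Minimal ELIZA-style rules (illustrative, not Weizenbaum's original script).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I),        "Please tell me more."),  # fallback
]

def reflect(text):
    """Swap first-person words for second-person ones ('my' -> 'your')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(sentence):
    # First rule whose pattern matches wins; captured text is reflected
    # and substituted into the template.
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I am worried about my exam"))
# -> How long have you been worried about your exam?
```

The response sounds empathetic, yet the program has only shuffled the user's own words—syntax without semantics, which is precisely why the ELIZA effect matters.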

Watson Wins Jeopardy! (2011)

  • Combined NLP with massive knowledge retrieval—IBM's Watson parsed complex, ambiguous clues and retrieved answers from millions of documents
  • Used statistical confidence scoring rather than "understanding" to select responses, raising questions about whether performance equals comprehension
  • Demonstrated practical AI capability while leaving the understanding question unresolved—Watson couldn't explain why an answer was correct
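
The idea of answering by confidence rather than comprehension can be illustrated with a toy selector (invented numbers and threshold; this is not Watson's actual scoring pipeline): candidates get evidence scores, scores are softmax-normalized into confidences, and the system "buzzes in" only when its best confidence clears a threshold.

```python
import math

def pick_answer(candidates, threshold=0.5):
    """candidates: (answer, evidence_score) pairs. Softmax-normalize the
    scores into confidences and answer only if the best one clears the bar."""
    total = sum(math.exp(s) for _, s in candidates)
    ranked = sorted(((math.exp(s) / total, a) for a, s in candidates),
                    reverse=True)
    confidence, answer = ranked[0]
    return (answer, confidence) if confidence >= threshold else (None, confidence)

# Hypothetical candidate list for a clue (scores made up for illustration).
clue_candidates = [("Toronto", 0.2), ("Chicago", 2.1), ("Springfield", 0.5)]
answer, conf = pick_answer(clue_candidates)
print(answer, round(conf, 2))
```

Nothing in this procedure involves knowing *why* an answer is right—only how strongly the evidence statistics favor it, which is the point the bullet makes.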

Compare: ELIZA vs. Watson—both processed natural language, but ELIZA used simple rules while Watson used statistical learning over massive datasets. Neither demonstrates understanding in the philosophical sense, but Watson's performance is far more sophisticated. If asked about the Chinese Room argument, both are relevant examples.


Knowledge-Based and Expert Systems

Rather than pursuing general intelligence, this approach encoded human expertise into specialized systems. It reflected a shift toward narrow AI and tested whether intelligence could be captured as domain-specific rules.

Expert Systems (1970s)

  • Encoded expert knowledge as if-then rules—systems like MYCIN (medical diagnosis) and DENDRAL (chemistry) captured specialist reasoning in knowledge bases
  • Used inference engines to chain rules together, mimicking how human experts reason through cases
  • Succeeded in narrow domains but failed to generalize—revealed that expertise involves tacit knowledge difficult to articulate as explicit rules (the knowledge acquisition bottleneck)
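
The if-then architecture above can be captured in a few lines: a forward-chaining inference engine fires any rule whose conditions are all in working memory and asserts its conclusion as a new fact. The medical rules below are invented toy examples, not MYCIN's actual knowledge base.

```python
# Minimal forward-chaining inference engine (toy rules, not MYCIN's).
# Each rule is (set-of-conditions, conclusion).
RULES = [
    ({"fever", "cough"}, "flu-suspected"),
    ({"flu-suspected", "high-risk-patient"}, "recommend-antiviral"),
]

def forward_chain(facts, rules):
    """Fire rules until no new facts can be derived (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: assert its conclusion
                changed = True
    return facts

facts_out = forward_chain({"fever", "cough", "high-risk-patient"}, RULES)
print(facts_out)
```

Note that the second rule can only fire after the first one has added "flu-suspected"—the engine chains rules together, which is what the inference-engine bullet describes. The knowledge acquisition bottleneck is also visible here: every rule must be written out by a human expert in advance.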

Connectionism and the Neural Network Revolution

The connectionist paradigm proposed that intelligence emerges from distributed processing across networks of simple units—inspired by the brain's architecture. This approach challenged symbolic AI's assumptions about discrete representations.

Backpropagation for Neural Networks (1986)

  • Enabled multi-layer networks to learn—the algorithm adjusts connection weights by propagating error signals backward through the network
  • Revived connectionism after earlier neural network approaches (perceptrons) were shown to have severe limitations
  • Provided a learning mechanism that didn't require explicit programming of rules—the network discovers patterns from examples, more closely modeling how humans learn from experience
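
A from-scratch sketch makes the mechanism concrete: a tiny 2-2-1 sigmoid network trained on XOR, the task single-layer perceptrons famously cannot solve. The architecture, learning rate, and epoch count are chosen for illustration; the weight updates follow the chain rule, propagating the output error backward through the hidden layer.

```python
import math, random

random.seed(0)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))

# Weights: w[i][j] connects input i to hidden unit j; v connects hidden
# units to the single output; bh/bo are biases.
w  = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
bh = [random.uniform(-1, 1) for _ in range(2)]
v  = [random.uniform(-1, 1) for _ in range(2)]
bo = random.uniform(-1, 1)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def forward(x):
    h = [sig(x[0] * w[0][j] + x[1] * w[1][j] + bh[j]) for j in range(2)]
    o = sig(h[0] * v[0] + h[1] * v[1] + bo)
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr, before = 0.5, loss()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                    # output-layer delta
        d_h = [d_o * v[j] * h[j] * (1 - h[j])          # error propagated
               for j in range(2)]                      # back to hidden layer
        for j in range(2):
            v[j] -= lr * d_o * h[j]
            for i in range(2):
                w[i][j] -= lr * d_h[j] * x[i]
            bh[j] -= lr * d_h[j]
        bo -= lr * d_o

print(round(before, 3), "->", round(loss(), 3))  # squared error shrinks
```

No XOR rule was ever programmed in—the network discovers the mapping by adjusting weights from examples, which is exactly the point of the final bullet above.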

ImageNet and Deep Learning Breakthrough (2012)


  • Convolutional neural network (CNN) slashed error rates—AlexNet cut the ImageNet top-5 classification error to roughly 15%, about ten points ahead of the nearest competitor and all previous approaches
  • Demonstrated the power of deep architectures with many layers, enabling hierarchical feature learning (edges → shapes → objects)
  • Sparked the current AI revolution—deep learning now dominates computer vision, speech recognition, and NLP, shifting the field away from hand-crafted features
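
The building block behind "edges → shapes → objects" is the convolution itself: sliding a small kernel over an image to produce a feature map. Below is a minimal pure-Python version (strictly, a cross-correlation, as in most deep learning libraries) with a hand-made kernel that responds to vertical edges; in a real CNN the kernel values are learned, not hand-coded.

```python
def conv2d(image, kernel):
    """Valid (no-padding) 2-D convolution of a 2-D list by a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            # Dot product of the kernel with the image patch at (r, c).
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# Toy image: dark left half, bright right half (a vertical edge).
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
vertical_edge = [[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]]
fmap = conv2d(image, vertical_edge)
print(fmap)  # -> [[3, 3], [3, 3]] : strong response across the edge
```

Stacking layers of such filters, with the deeper layers convolving over the feature maps of earlier ones, is what gives a deep network its hierarchy of features.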

Compare: Expert Systems vs. Deep Learning—expert systems required humans to explicitly encode knowledge, while deep learning extracts patterns automatically from data. This reflects a fundamental debate in cognitive science: is knowledge represented explicitly or distributed implicitly across neural connections?


Game-Playing AI and Strategic Intelligence

Games provide controlled environments to test AI capabilities in strategic reasoning, planning, and learning. Each milestone here represents a different approach to achieving superhuman performance.

Deep Blue Defeats Kasparov (1997)

  • Used brute-force search plus evaluation functions—examined 200 million positions per second, relying on computational power rather than human-like intuition
  • Demonstrated narrow AI supremacy in a well-defined domain with clear rules and perfect information
  • Did not learn or adapt—its chess knowledge was hand-coded by human experts, representing the symbolic AI approach
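
The Deep Blue recipe—deep search plus a hand-coded evaluation function—is minimax in miniature. The sketch below runs depth-limited minimax over a tiny invented game tree (not chess): leaves are static evaluations, and internal nodes alternate between a maximizing and a minimizing player.

```python
def minimax(node, depth, maximizing, evaluate, children):
    """Depth-limited minimax: search to `depth`, then fall back on the
    hand-coded `evaluate` function, as Deep Blue did at its horizon."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    scores = [minimax(k, depth - 1, not maximizing, evaluate, children)
              for k in kids]
    return max(scores) if maximizing else min(scores)

# Hypothetical tiny game: internal nodes are tuples of subtrees,
# leaves are integer evaluations.
tree = (((3, 5), (2, 9)), ((0, 1), (7, 4)))
value = minimax(tree, depth=3, maximizing=True,
                evaluate=lambda n: n,
                children=lambda n: n if isinstance(n, tuple) else ())
print(value)  # -> 5: best the maximizer can guarantee against best play
```

Everything doing the "thinking" here is the programmer's evaluation function plus raw search—no learning anywhere, which is the contrast the bullets draw with AlphaGo.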

AlphaGo Defeats World Champion (2016)

  • Combined deep learning with reinforcement learning—trained on human games, then improved by playing millions of games against itself
  • Mastered a game considered too complex for brute force—Go has more possible positions than atoms in the universe, requiring intuitive pattern recognition
  • Made moves human experts couldn't explain—suggesting the network discovered strategies beyond human understanding, raising questions about interpretability in AI

Compare: Deep Blue vs. AlphaGo—Deep Blue used symbolic search and hand-coded evaluation; AlphaGo used neural networks and self-play learning. This contrast perfectly illustrates the shift from GOFAI (Good Old-Fashioned AI) to modern machine learning. Expect FRQs to ask you to analyze what each approach reveals about the nature of expertise and intuition.


Quick Reference Table

  • Symbolic AI / Classical Paradigm: Logic Theorist, General Problem Solver, Expert Systems
  • Connectionism / Neural Networks: Backpropagation, ImageNet/Deep Learning, AlphaGo
  • Natural Language Processing: ELIZA, Watson
  • Behavioral Tests of Intelligence: Turing Test, ELIZA
  • Domain-Specific vs. General AI: Expert Systems (narrow), GPS (attempted general)
  • Learning vs. Programmed Knowledge: AlphaGo, Deep Learning (learning) vs. Deep Blue, Expert Systems (programmed)
  • Human-Computer Interaction: ELIZA, Watson, Turing Test
  • Strategic Reasoning: Deep Blue, AlphaGo

Self-Check Questions

  1. Compare and contrast the Logic Theorist and AlphaGo. What do their different approaches reveal about competing theories of cognition in cognitive science?

  2. Which two milestones best illustrate the ELIZA effect, and why does this phenomenon matter for debates about machine understanding?

  3. If an FRQ asks you to evaluate the Chinese Room argument, which milestones would you use as examples of systems that process symbols without understanding? Explain your choices.

  4. What distinguishes the symbolic AI paradigm from the connectionist paradigm? Identify one milestone from each approach and explain how they reflect different hypotheses about the architecture of mind.

  5. Both Deep Blue and AlphaGo achieved superhuman performance in games. Why is AlphaGo considered a more significant milestone for understanding human-like intelligence? What does this suggest about the role of learning in cognition?