
Language and Cognition

Key Concepts in Semantic Networks


Why This Matters

Semantic networks are the mental architecture behind how you store, organize, and retrieve meaning—and they're central to understanding language cognition. When you hear the word "dog" and instantly think of "bark," "pet," or "animal," you're experiencing your semantic network in action. These models explain everything from why some words come to mind faster than others to how we categorize the world around us. You'll be tested on the differences between hierarchical models, feature-based approaches, and connectionist frameworks, as well as the experimental evidence that supports each.

Don't just memorize the names of these models—know what problem each one solves and what cognitive phenomena it explains. The exam loves to ask you to compare approaches (Why does spreading activation explain priming better than a strict hierarchy?) or apply them to real scenarios (How would a connectionist model handle learning a new word?). Understanding the underlying mechanisms will serve you far better than rote recall.


Hierarchical and Structural Models

These foundational models propose that knowledge is organized in tree-like structures, with general categories branching into specific instances. The key mechanism is inheritance—properties stored at higher nodes automatically apply to everything below them.

Collins and Quillian's Model

  • Cognitive economy—properties are stored at the highest relevant node to avoid redundancy (e.g., "has skin" stored at ANIMAL, not repeated for every animal)
  • Retrieval time increases with link distance, meaning verifying "a canary is an animal" takes longer than "a canary is a bird"
  • "Is-a" links define category membership and enable property inheritance throughout the hierarchy

Hierarchical Networks

  • Tree-like organization places superordinate categories at the top with increasingly specific instances below
  • Inheritance mechanism allows efficient storage—you don't need to store "breathes" for every living thing separately
  • Facilitates top-down processing in language comprehension by activating general categories before specific exemplars

Compare: Collins and Quillian's Model vs. general Hierarchical Networks—both use tree structures and inheritance, but Collins and Quillian specifically predicts retrieval times based on link traversal. If an FRQ asks about cognitive economy or reaction time predictions, go with Collins and Quillian.
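
To make the mechanism concrete, here is a minimal Python sketch of a Collins-and-Quillian-style hierarchy. The node names, the properties, and the idea of counting one time unit per link are illustrative assumptions, not parameters from the original model.

```python
# Toy Collins & Quillian hierarchy: each node stores only the properties
# not already available from its parent (cognitive economy).
hierarchy = {
    "animal": {"parent": None,     "properties": {"has skin", "breathes"}},
    "bird":   {"parent": "animal", "properties": {"has wings", "can fly"}},
    "canary": {"parent": "bird",   "properties": {"is yellow", "can sing"}},
}

def verify(concept, category):
    """Walk 'is-a' links upward; return (is_member, links_traversed)."""
    node, links = concept, 0
    while node is not None:
        if node == category:
            return True, links
        node = hierarchy[node]["parent"]
        links += 1
    return False, links

def has_property(concept, prop):
    """Inheritance: a property holds if the node or any ancestor stores it."""
    node = concept
    while node is not None:
        if prop in hierarchy[node]["properties"]:
            return True
        node = hierarchy[node]["parent"]
    return False

print(verify("canary", "bird"))            # (True, 1) -> verified quickly
print(verify("canary", "animal"))          # (True, 2) -> one more link, slower
print(has_property("canary", "has skin"))  # True, inherited from "animal"
```

The link count stands in for retrieval time: the more "is-a" links that must be traversed, the slower the verification, which is exactly the reaction-time pattern the model predicts.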


Activation-Based Models

These models explain how accessing one concept triggers related concepts automatically. The core mechanism is spreading activation—energy flows outward from an activated node, priming connected concepts for faster retrieval.

Spreading Activation Theory

  • Activation spreads through associative links, with strength diminishing as distance from the source increases
  • Explains semantic priming—why you recognize "nurse" faster after seeing "doctor" than after seeing "bread"
  • Accounts for context effects in language processing, where surrounding words pre-activate relevant meanings

Semantic Priming

  • Facilitation effect—response times decrease when a target word follows a semantically related prime
  • Demonstrates automatic activation of related concepts, occurring even when primes are presented subliminally
  • Critical evidence for network connectivity—the strength of priming effects reveals the associative distance between concepts

Compare: Spreading Activation Theory vs. Semantic Priming—spreading activation is the mechanism, while semantic priming is the phenomenon it explains. Know this distinction for multiple-choice questions that ask you to match theories to evidence.
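
Below is a minimal sketch of spreading activation over an associative network. The words, link strengths, decay factor, and threshold are made up for illustration.

```python
# Toy spreading activation: activation flows outward from a source node,
# weakening by a decay factor at every associative link it crosses.
links = {
    "doctor": {"nurse": 0.9, "hospital": 0.8},
    "nurse":  {"hospital": 0.7, "patient": 0.6},
    "bread":  {"butter": 0.9, "bakery": 0.8},
}

def spread(source, decay=0.5, threshold=0.05):
    """Breadth-first spread; returns the residual activation at each node."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop(0)
        for neighbor, strength in links.get(node, {}).items():
            incoming = activation[node] * strength * decay
            if incoming > activation.get(neighbor, 0.0) and incoming > threshold:
                activation[neighbor] = incoming
                frontier.append(neighbor)
    return activation

primed = spread("doctor")
print(primed.get("nurse", 0.0))   # residual activation -> recognized faster
print(primed.get("butter", 0.0))  # 0.0 -> "doctor" does not prime "butter"
```

Priming falls out of the mechanism: a target that already carries residual activation from the prime needs less additional input to reach the recognition threshold, so responses are faster.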


Feature-Based Models

Rather than focusing on hierarchical links, these models represent concepts as bundles of features. Similarity between concepts depends on how many features they share.

Semantic Feature Comparison Model

  • Concepts represented as feature lists containing both defining features (necessary for category membership) and characteristic features (typical but not required)
  • Two-stage comparison process—first a quick check of overall similarity, then a slower verification of defining features if needed
  • Explains typicality effects—why "robin" feels more bird-like than "penguin" (robins share more characteristic bird features)

Compare: Semantic Feature Comparison Model vs. Collins and Quillian—hierarchical models struggle to explain why some category members feel more "typical" than others, but feature comparison handles this easily through characteristic feature overlap. This is a classic exam contrast.
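
Here is a toy version of the two-stage comparison process. The feature lists and the decision cutoffs are invented for illustration and are not the model's published parameters.

```python
# Toy feature comparison: stage 1 checks overall feature overlap; only an
# intermediate overlap triggers the slower stage-2 check of defining features.
concepts = {
    "bird":    {"defining": {"has feathers", "lays eggs"},
                "characteristic": {"flies", "small", "sings", "perches in trees"}},
    "robin":   {"defining": {"has feathers", "lays eggs"},
                "characteristic": {"flies", "small", "sings", "perches in trees"}},
    "penguin": {"defining": {"has feathers", "lays eggs"},
                "characteristic": {"swims", "large", "lives in cold climates"}},
}

def verify(instance, category, high=0.7, low=0.3):
    inst, cat = concepts[instance], concepts[category]
    inst_all = inst["defining"] | inst["characteristic"]
    cat_all = cat["defining"] | cat["characteristic"]
    overlap = len(inst_all & cat_all) / len(cat_all)
    if overlap >= high:
        return True, "fast (stage 1 only)"
    if overlap <= low:
        return False, "fast (stage 1 only)"
    # Intermediate similarity: fall back to the slower defining-feature check.
    return cat["defining"] <= inst["defining"], "slow (stage 2 needed)"

print(verify("robin", "bird"))    # (True, 'fast (stage 1 only)')
print(verify("penguin", "bird"))  # (True, 'slow (stage 2 needed)')
```

Typicality effects are just the difference in route: typical members clear the stage-1 similarity check immediately, while atypical members need the extra defining-feature pass, and that extra stage shows up as longer reaction times.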


Connectionist and Neural Approaches

These models abandon discrete symbols entirely, representing knowledge as patterns of activation across distributed networks. Learning occurs through adjusting connection weights based on experience.

Connectionist Models

  • Distributed representations—concepts aren't stored in single nodes but as patterns across many interconnected units
  • Learning through weight adjustment allows the network to adapt and generalize from experience
  • Graceful degradation—partial damage doesn't destroy knowledge completely, mimicking how brain damage affects cognition

Compare: Connectionist Models vs. Hierarchical Networks—hierarchical models use explicit symbols and rules, while connectionist models learn implicit patterns. Connectionist approaches better explain how we handle exceptions and novel inputs, but hierarchical models are easier to interpret.
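
The sketch below shows the core connectionist idea in miniature: a single-layer network that learns, by gradually adjusting its weights, to map distributed feature patterns onto a category. The feature coding, learning rate, and training patterns are all invented for illustration.

```python
import random

# Toy connectionist network: a concept is a distributed pattern over features,
# and knowledge lives in the connection weights rather than in any single node.
# Input features: [has_feathers, flies, swims, barks]; target output: 1 = "bird".
training_data = [
    ([1, 1, 0, 0], 1),  # robin-like pattern
    ([1, 0, 1, 0], 1),  # penguin-like pattern (a bird that swims, not flies)
    ([0, 0, 0, 1], 0),  # dog-like pattern
    ([0, 0, 1, 0], 0),  # fish-like pattern
]

random.seed(0)
weights = [random.uniform(-0.1, 0.1) for _ in range(4)]
bias = 0.0
rate = 0.2

def predict(features):
    total = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if total > 0 else 0

# Learning = nudging each weight whenever the output is wrong (delta-rule style).
for _ in range(20):
    for features, target in training_data:
        error = target - predict(features)
        for i in range(4):
            weights[i] += rate * error * features[i]
        bias += rate * error

print([predict(f) for f, _ in training_data])  # expect [1, 1, 0, 0] after training
print(predict([1, 1, 0, 1]))  # a novel pattern: the answer falls out of the weights
```

No unit stores "bird" as such; the category lives in the weight pattern. That is why damaging some of the weights degrades performance gradually (graceful degradation) instead of deleting the concept outright.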


Propositional and Relational Models

These frameworks emphasize that meaning isn't just about concepts—it's about the relationships between them. Knowledge is stored as structured propositions or relational graphs.

Propositional Networks

  • Knowledge stored as propositions—structured statements with predicates and arguments (e.g., CHASE[dog, cat])
  • Captures relational meaning that simple associations miss—"the dog chased the cat" differs from "the cat chased the dog"
  • Supports logical inference by allowing operations on propositional structures

Conceptual Graphs

  • Visual notation system representing concepts as nodes and relations as labeled edges
  • Expresses complex logical relationships including quantification, negation, and nested structures
  • Bridges language and reasoning—useful for understanding how sentence meaning maps to mental representations

Compare: Propositional Networks vs. Conceptual Graphs—both capture relational structure, but conceptual graphs provide a visual formalism that's particularly useful for computational modeling. Propositional networks are more common in psychological theories of text comprehension.
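
A short sketch of both ideas: propositions as predicate-argument structures, and a conceptual-graph-style representation as concept nodes joined by labeled relation edges. The role labels and the query helper are illustrative.

```python
# Propositions as predicate-argument structures: argument roles carry meaning,
# so reversing who does what to whom yields a different proposition.
prop_1 = ("CHASE", {"agent": "dog", "patient": "cat"})
prop_2 = ("CHASE", {"agent": "cat", "patient": "dog"})
print(prop_1 == prop_2)  # False: same concepts, different relational structure

# A conceptual-graph-style representation of "The teacher gave the student a book":
# concept nodes connected by labeled relation edges.
graph = [
    ("give", "agent", "teacher"),
    ("give", "recipient", "student"),
    ("give", "theme", "book"),
]

def ask(relation, role):
    """Relational query: which concept fills a given role of a relation?"""
    return [node for rel, r, node in graph if rel == relation and r == role]

print(ask("give", "recipient"))  # ['student'] -> inference over explicit structure
```

Because the roles are explicit, the representation distinguishes who did what to whom, something a bag of associated concepts cannot do.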


Computational Lexical Resources

These large-scale databases operationalize semantic network principles, providing structured representations of word meanings and relationships.

WordNet

  • Organizes words into synsets—sets of synonyms representing a single concept
  • Encodes multiple relation types: hypernyms (is-a-kind-of), hyponyms (specific-type-of), meronyms (part-of)
  • Foundation for NLP applications including word sense disambiguation, information retrieval, and machine translation

FrameNet

  • Frame-based organization—words are grouped by the conceptual scenarios (frames) they evoke
  • Captures argument structure—specifies the typical participants, props, and settings associated with each frame
  • Reveals how context shapes meaning—the same word activates different frames in different situations

Compare: WordNet vs. FrameNet—WordNet focuses on paradigmatic relations (synonymy, hyponymy), while FrameNet emphasizes syntagmatic relations (what concepts co-occur in situations). For questions about word relationships, think WordNet; for questions about situational meaning, think FrameNet.
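
The snippet below shows the flavor of each resource. The WordNet part uses NLTK's interface and assumes nltk is installed and the wordnet data has been downloaded via nltk.download('wordnet'); the FrameNet part is a hand-rolled toy dictionary, not the real FrameNet API.

```python
# WordNet via NLTK: words grouped into synsets linked by typed relations.
from nltk.corpus import wordnet as wn

dog = wn.synset("dog.n.01")        # one synset = one concept, several synonyms
print(dog.lemma_names())           # the synonyms in this synset
print(dog.hypernyms())             # hypernyms: is-a-kind-of
print(dog.hyponyms()[:3])          # hyponyms: specific types of dog
print(dog.part_meronyms())         # meronyms: part-of relations

# FrameNet-style idea as a toy structure: different verbs can evoke the same
# conceptual scenario, with the same participant roles.
transfer_frame = {
    "frame": "Transfer",
    "roles": ["donor", "recipient", "theme"],
    "evoked_by": ["give", "hand", "donate", "receive"],
}
print("receive" in transfer_frame["evoked_by"])  # True: same frame, different verb
```

The contrast in the code mirrors the exam contrast: WordNet tells you how words relate to each other, while a frame-based resource tells you what situation a word places you in.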


Quick Reference Table

Concept | Best Examples
Hierarchical organization | Collins and Quillian's Model, Hierarchical Networks
Activation-based retrieval | Spreading Activation Theory, Semantic Priming
Feature-based representation | Semantic Feature Comparison Model
Distributed/learned representations | Connectionist Models
Relational knowledge structures | Propositional Networks, Conceptual Graphs
Typicality effects | Semantic Feature Comparison Model
Cognitive economy | Collins and Quillian's Model
Computational applications | WordNet, FrameNet

Self-Check Questions

  1. Which two models both use hierarchical structure but differ in whether they predict specific retrieval times? What additional mechanism does one include that the other lacks?

  2. A participant responds faster to "butter" after seeing "bread" than after seeing "lamp." Which model best explains this result, and what mechanism does it propose?

  3. Compare and contrast how the Semantic Feature Comparison Model and Collins and Quillian's Model would explain why people are faster to verify "a robin is a bird" than "a penguin is a bird."

  4. If an FRQ asks you to explain how someone could still understand language after partial brain damage, which model type provides the best explanation and why?

  5. You're designing a computer system to understand that "The teacher gave the student a book" and "The student received a book from the teacher" mean the same thing. Would WordNet or FrameNet be more useful, and what feature of that resource supports your answer?