Language and Cognition

Key Concepts in Semantic Networks


Why This Matters

Semantic networks are the mental architecture behind how you store, organize, and retrieve meaning. They're central to understanding language cognition. When you hear the word "dog" and instantly think of "bark," "pet," or "animal," you're experiencing your semantic network in action. These models explain everything from why some words come to mind faster than others to how we categorize the world around us. You'll be tested on the differences between hierarchical models, feature-based approaches, and connectionist frameworks, as well as the experimental evidence that supports each.

Don't just memorize the names of these models. Know what problem each one solves and what cognitive phenomena it explains. Exams often ask you to compare approaches (Why does spreading activation explain priming better than a strict hierarchy?) or apply them to real scenarios (How would a connectionist model handle learning a new word?). Understanding the underlying mechanisms will serve you far better than rote recall.


Hierarchical and Structural Models

These foundational models propose that knowledge is organized in tree-like structures, with general categories branching into specific instances. The key mechanism is inheritance: properties stored at higher nodes automatically apply to everything below them.

Collins and Quillian's Model

  • Cognitive economy is the central principle. Properties are stored at the highest relevant node to avoid redundancy. For example, "has skin" is stored at the ANIMAL node rather than repeated for every individual animal.
  • Retrieval time increases with link distance. Verifying "a canary is an animal" takes longer than "a canary is a bird" because you have to traverse more links in the hierarchy. Collins and Quillian (1969) demonstrated this with sentence verification tasks where reaction times tracked the number of levels separating two nodes.
  • "Is-a" links define category membership and enable property inheritance throughout the hierarchy.

A major limitation: this model predicts retrieval time based purely on distance, but it can't easily explain why some category members (like "robin" for BIRD) are verified faster than others at the same hierarchical distance (like "ostrich" for BIRD). That's where feature-based models step in.

Hierarchical Networks

  • Tree-like organization places superordinate categories at the top with increasingly specific instances below.
  • Inheritance mechanism allows efficient storage. You don't need to store "breathes" for every living thing separately.
  • Facilitates top-down processing in language comprehension by activating general categories before specific exemplars.

Compare: Collins and Quillian's Model vs. general Hierarchical Networks: both use tree structures and inheritance, but Collins and Quillian specifically predicts retrieval times based on link traversal. If a question asks about cognitive economy or reaction time predictions, go with Collins and Quillian.


Activation-Based Models

These models explain how accessing one concept triggers related concepts automatically. The core mechanism is spreading activation: energy flows outward from an activated node, priming connected concepts for faster retrieval.

Spreading Activation Theory

Collins and Loftus (1975) developed this as a revision of the earlier Collins and Quillian model. Instead of a strict hierarchy, concepts are connected in a flexible web of associative links that vary in strength.

  • Activation spreads through associative links, with strength diminishing as distance from the source increases.
  • Explains semantic priming: you recognize "nurse" faster after seeing "doctor" than after seeing "bread," because activation has already spread from "doctor" to nearby medical concepts.
  • Accounts for context effects in language processing, where surrounding words pre-activate relevant meanings. This is why ambiguous words (like "bank") tend to be interpreted correctly in context.
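
A toy sketch of the mechanism, assuming a hand-built associative network with made-up link strengths and an arbitrary decay factor: activation spreads outward from a source node, weakening with each step, and leaves residual activation on nearby concepts.

```python
# Toy spreading-activation sketch: activation flows outward along weighted
# associative links, decaying with each step. Concepts, weights, and the
# decay factor are illustrative, not taken from Collins and Loftus (1975).

network = {
    "doctor":   {"nurse": 0.8, "hospital": 0.7, "stethoscope": 0.5},
    "nurse":    {"doctor": 0.8, "hospital": 0.6},
    "hospital": {"doctor": 0.7, "nurse": 0.6, "building": 0.4},
    "bread":    {"butter": 0.9, "bakery": 0.6},
    "butter":   {"bread": 0.9},
}

def spread(source, steps=2, decay=0.5):
    """Return activation levels after spreading from a single source node."""
    activation = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbour, weight in network.get(node, {}).items():
                boost = act * weight * decay
                if boost > activation.get(neighbour, 0.0):
                    activation[neighbour] = boost
                    next_frontier[neighbour] = boost
        frontier = next_frontier
    return activation

# Priming: "nurse" receives residual activation from "doctor"; "bread" does not.
print(spread("doctor").get("nurse", 0.0))  # > 0  -> primed, faster recognition
print(spread("doctor").get("bread", 0.0))  # 0.0  -> unprimed
```

Because "nurse" ends up pre-activated after "doctor" while "bread" does not, the sketch reproduces the priming pattern described in the next subsection.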

Semantic Priming

Semantic priming is the phenomenon that spreading activation (the mechanism) explains. Keep this distinction clear.

  • Facilitation effect: response times decrease when a target word follows a semantically related prime. In a typical lexical decision task, participants respond about 20-40 ms faster to related targets.
  • Demonstrates automatic activation of related concepts, occurring even when primes are presented subliminally (below conscious awareness), which suggests the process doesn't require deliberate effort.
  • Critical evidence for network connectivity: the strength of priming effects reveals the associative distance between concepts. Stronger priming = closer or stronger connections.

Compare: Spreading Activation Theory vs. Semantic Priming: spreading activation is the mechanism, while semantic priming is the phenomenon it explains. Know this distinction for multiple-choice questions that ask you to match theories to evidence.


Feature-Based Models

Rather than focusing on hierarchical links, these models represent concepts as bundles of features. Similarity between concepts depends on how many features they share.

Semantic Feature Comparison Model

Smith, Shoben, and Rips (1974) proposed this model to address something hierarchical models couldn't handle well: typicality effects.

  • Concepts are represented as feature lists containing both defining features (necessary for category membership, like "has feathers" for BIRD) and characteristic features (typical but not required, like "can fly" for BIRD).
  • Two-stage comparison process:
    1. Stage 1: A quick check compares the overall feature similarity between the item and the category. If similarity is very high, you say "yes" immediately. If very low, you say "no" immediately.
    2. Stage 2: If similarity falls in an intermediate range, a slower, more careful check examines only the defining features to make a final decision.
  • Explains typicality effects: "Robin" feels more bird-like than "penguin" because robins share more characteristic bird features (flies, small, sings). Typical members get verified in Stage 1; atypical members require the slower Stage 2.
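
A minimal sketch of the two-stage decision, assuming made-up feature lists and arbitrary similarity thresholds:

```python
# Sketch of the Smith, Shoben, and Rips two-stage comparison. Feature lists,
# the overlap measure, and the thresholds are illustrative assumptions.

category_bird = {
    "defining": {"has feathers", "lays eggs"},
    "characteristic": {"can fly", "is small", "sings"},
}

exemplars = {
    "robin":   {"has feathers", "lays eggs", "can fly", "is small", "sings"},
    "penguin": {"has feathers", "lays eggs", "swims", "is large"},
    "bat":     {"can fly", "is small", "has fur"},
}

def verify(item, category, high=0.8, low=0.3):
    all_feats = category["defining"] | category["characteristic"]
    overlap = len(exemplars[item] & all_feats) / len(all_feats)
    # Stage 1: fast check on overall feature similarity.
    if overlap >= high:
        return "yes (fast, stage 1)"
    if overlap <= low:
        return "no (fast, stage 1)"
    # Stage 2: slower check restricted to defining features.
    if category["defining"] <= exemplars[item]:
        return "yes (slow, stage 2)"
    return "no (slow, stage 2)"

print(verify("robin", category_bird))    # typical member: decided in stage 1
print(verify("penguin", category_bird))  # atypical member: needs stage 2
print(verify("bat", category_bird))      # shares only characteristic features
```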

Compare: Semantic Feature Comparison Model vs. Collins and Quillian: hierarchical models struggle to explain why some category members feel more "typical" than others, but feature comparison handles this through characteristic feature overlap. This is a classic exam contrast.


Connectionist and Neural Approaches

These models abandon discrete symbols entirely, representing knowledge as patterns of activation across distributed networks. Learning occurs through adjusting connection weights based on experience.

Connectionist Models

  • Distributed representations: concepts aren't stored in single nodes but as patterns across many interconnected processing units. The concept "dog" isn't in one place; it's a specific pattern of activation across the network.
  • Learning through weight adjustment: the network adjusts the strength of connections between units based on exposure to input. This is typically modeled using backpropagation or Hebbian learning rules, allowing the network to generalize from experience.
  • Graceful degradation: partial damage doesn't destroy knowledge completely. Performance declines gradually rather than catastrophically, which closely mimics how brain damage affects cognition. A patient with semantic dementia, for instance, may lose fine-grained distinctions before losing broad categories.
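
A small sketch of these ideas, assuming toy distributed patterns, a basic Hebbian update rule, and an arbitrary recall threshold; this is an illustration of the general approach, not a specific published model.

```python
# Toy Hebbian learning sketch: a concept is a distributed pattern of unit
# activations, and connection weights strengthen when units fire together.
# Patterns, learning rate, and thresholds are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_units = 8
weights = np.zeros((n_units, n_units))

# "dog" and "cat" as overlapping distributed patterns (not single nodes).
dog = np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=float)
cat = np.array([1, 1, 0, 0, 1, 0, 0, 1], dtype=float)

def hebbian_update(w, pattern, lr=0.1):
    """Strengthen connections between co-active units (Hebb's rule)."""
    return w + lr * np.outer(pattern, pattern)

for pattern in (dog, cat):
    weights = hebbian_update(weights, pattern)

def recall(w, cue):
    """Complete a partial cue with one pass through the weights (toy threshold)."""
    return (w @ cue > 0.25).astype(float)

partial_dog = dog.copy()
partial_dog[3] = 0.0                      # degrade the input cue
print(recall(weights, partial_dog))       # pattern completion recovers "dog"

# Graceful degradation: zero out a few connections and recall typically only
# worsens gradually, because knowledge is spread across many weights.
damaged = weights.copy()
damaged[rng.integers(0, n_units, size=5), rng.integers(0, n_units, size=5)] = 0.0
print(recall(damaged, partial_dog))       # often still close to the "dog" pattern
```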

Compare: Connectionist Models vs. Hierarchical Networks: hierarchical models use explicit symbols and rules, while connectionist models learn implicit patterns from statistical regularities. Connectionist approaches better explain how we handle exceptions and novel inputs, but hierarchical models are easier to interpret and test with specific predictions.


Propositional and Relational Models

These frameworks emphasize that meaning isn't just about concepts in isolation. It's about the relationships between them. Knowledge is stored as structured propositions or relational graphs.

Propositional Networks

  • Knowledge stored as propositions: structured statements with predicates and arguments, such as CHASE(dog, cat). Each proposition captures a specific relationship.
  • Captures relational meaning that simple associations miss. "The dog chased the cat" and "the cat chased the dog" activate the same nodes but have different propositional structures, preserving who did what to whom.
  • Supports logical inference by allowing operations on propositional structures. If you know CHASE(dog, cat) and FLEE(cat), you can infer a causal connection.
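
A minimal sketch of propositions as predicate-plus-role structures, with illustrative role names and a toy inference rule:

```python
# Sketch of propositional representation: a predicate plus role-bound
# arguments. Role names and the inference rule are illustrative assumptions.

from collections import namedtuple

Proposition = namedtuple("Proposition", ["predicate", "agent", "patient"])

p1 = Proposition("CHASE", agent="dog", patient="cat")
p2 = Proposition("CHASE", agent="cat", patient="dog")

# Same concept nodes, different structure: who did what to whom is preserved.
print(p1 == p2)  # False

# Toy inference: if X chases Y and Y flees, link the two propositions causally.
knowledge = [p1, Proposition("FLEE", agent="cat", patient=None)]

def infer_cause(facts):
    for chase in facts:
        if chase.predicate != "CHASE":
            continue
        for flee in facts:
            if flee.predicate == "FLEE" and flee.agent == chase.patient:
                yield (chase, "CAUSES", flee)

print(list(infer_cause(knowledge)))
```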

Conceptual Graphs

  • Visual notation system representing concepts as nodes and relations as labeled directed edges.
  • Expresses complex logical relationships including quantification, negation, and nested structures, going beyond what simple associative networks can capture.
  • Bridges language and reasoning: useful for understanding how sentence meaning maps to mental representations, and widely used in computational modeling of language understanding.
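
A tiny sketch of a conceptual graph as concept nodes plus labeled, directed relation edges; the relation labels and sentence are illustrative:

```python
# Conceptual-graph sketch for "the dog chased the cat in the park":
# concept nodes connected by labeled, directed relation edges.

nodes = {"chase", "dog", "cat", "park"}
edges = [
    ("chase", "agent", "dog"),
    ("chase", "patient", "cat"),
    ("chase", "location", "park"),
]

def relations_of(concept):
    """All labeled edges leaving a concept node."""
    return [(rel, target) for src, rel, target in edges if src == concept]

print(relations_of("chase"))
# -> [('agent', 'dog'), ('patient', 'cat'), ('location', 'park')]
```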

Compare: Propositional Networks vs. Conceptual Graphs: both capture relational structure, but conceptual graphs provide a visual formalism that's particularly useful for computational modeling. Propositional networks are more common in psychological theories of text comprehension.


Computational Lexical Resources

These large-scale databases operationalize semantic network principles, providing structured representations of word meanings and relationships that can be used in both research and applications.

WordNet

  • Organizes words into synsets: sets of synonyms representing a single concept. For example, {car, auto, automobile} form one synset.
  • Encodes multiple relation types: hypernyms (is-a-kind-of), hyponyms (specific-type-of), meronyms (part-of), and antonyms.
  • Foundation for NLP applications including word sense disambiguation, information retrieval, and machine translation.
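
If you want to explore these relations directly, NLTK exposes WordNet. The sketch below assumes nltk is installed and the WordNet corpus has been downloaded (nltk.download("wordnet")); the exact synsets returned depend on the WordNet version.

```python
# Querying WordNet through NLTK. Assumes: pip install nltk, then
# nltk.download("wordnet") to fetch the corpus.

from nltk.corpus import wordnet as wn

# Synsets: each sense of "car" is a set of synonymous lemmas.
for synset in wn.synsets("car"):
    print(synset.name(), synset.lemma_names())

# Relations for the first (most common) sense of "car":
car = wn.synsets("car")[0]
print(car.hypernyms())       # is-a-kind-of, e.g. a motor_vehicle synset
print(car.hyponyms())        # specific types, e.g. cab, ambulance
print(car.part_meronyms())   # parts, e.g. accelerator, air_bag
```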

FrameNet

  • Frame-based organization: words are grouped by the conceptual scenarios (frames) they evoke. The "Commercial Transaction" frame, for instance, includes buy, sell, pay, cost, and spend.
  • Captures argument structure: specifies the typical participants, props, and settings associated with each frame. A commercial transaction involves a Buyer, Seller, Goods, and Money.
  • Reveals how context shapes meaning: the same word activates different frames in different situations. "Serve" evokes different frames in a restaurant context vs. a tennis context.
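
A deliberately simplified, hypothetical sketch of a frame as a data structure (not FrameNet's actual file format, API, or frame names beyond "Commercial Transaction"): words are indexed by the frames they can evoke, and context selects among them.

```python
# Hypothetical, simplified frame entries: a frame groups words by the
# scenario they evoke and names the roles (frame elements) involved.
# "Food Service" and "Sports Play" are made-up frame names for illustration.

commercial_transaction = {
    "frame": "Commercial Transaction",
    "frame_elements": ["Buyer", "Seller", "Goods", "Money"],
    "lexical_units": ["buy", "sell", "pay", "cost", "spend"],
}
serve_restaurant = {"frame": "Food Service", "lexical_units": ["serve", "order", "tip"]}
serve_tennis = {"frame": "Sports Play", "lexical_units": ["serve", "volley", "ace"]}

def frames_evoked(word, frames):
    """List the frames a word can evoke; surrounding context would pick one."""
    return [f["frame"] for f in frames if word in f["lexical_units"]]

print(frames_evoked("serve", [commercial_transaction, serve_restaurant, serve_tennis]))
# -> ['Food Service', 'Sports Play']
```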

Compare: WordNet vs. FrameNet: WordNet focuses on paradigmatic relations (synonymy, hyponymy, how words relate to each other in a taxonomy), while FrameNet emphasizes syntagmatic relations (what concepts co-occur in situations). For questions about word relationships, think WordNet. For questions about situational meaning and argument structure, think FrameNet.


Quick Reference Table

Concept                              | Best Examples
Hierarchical organization            | Collins and Quillian's Model, Hierarchical Networks
Activation-based retrieval           | Spreading Activation Theory, Semantic Priming
Feature-based representation         | Semantic Feature Comparison Model
Distributed/learned representations  | Connectionist Models
Relational knowledge structures      | Propositional Networks, Conceptual Graphs
Typicality effects                   | Semantic Feature Comparison Model
Cognitive economy                    | Collins and Quillian's Model
Computational applications           | WordNet, FrameNet

Self-Check Questions

  1. Which two models both use hierarchical structure but differ in whether they predict specific retrieval times? What additional mechanism does one include that the other lacks?

  2. A participant responds faster to "butter" after seeing "bread" than after seeing "lamp." Which model best explains this result, and what mechanism does it propose?

  3. Compare and contrast how the Semantic Feature Comparison Model and Collins and Quillian's Model would explain why people are faster to verify "a robin is a bird" than "a penguin is a bird."

  4. If a question asks you to explain how someone could still understand language after partial brain damage, which model type provides the best explanation and why?

  5. You're designing a computer system to understand that "The teacher gave the student a book" and "The student received a book from the teacher" mean the same thing. Would WordNet or FrameNet be more useful, and what feature of that resource supports your answer?