Memory isn't just one thing. It's a collection of systems, processes, and structures that cognitive psychologists have spent decades trying to map. You're being tested on your ability to distinguish between these models and explain why each one matters for understanding how information gets encoded, stored, and retrieved.
The models here represent fundamentally different ways of conceptualizing memory: some focus on structural components (what memory is made of), others on processing dynamics (how information moves through the system), and still others on organizational principles (how knowledge gets structured and connected).
Don't just memorize the names and components of each model. Know what problem each model was designed to solve and how they relate to each other. Can you explain why Baddeley's working memory model was an improvement over the original multi-store model? Can you articulate the difference between structural and processing approaches to memory? These are the kinds of comparative questions that show up on exams, especially in FRQs asking you to evaluate or apply memory theories to real-world scenarios.
These models propose that memory consists of distinct stores or systems, each with unique characteristics and functions. The key insight is that different types of information are handled by specialized components.
Atkinson and Shiffrin's (1968) multi-store model was the first widely accepted framework for how memory is organized. It proposes a linear flow: information enters through the sensory registers, gets selected by attention for short-term processing, and may, with rehearsal, make it into long-term storage.
The model's main limitation is that it treats STM as a single, passive holding area. It can't easily explain why you can hold a phone number in mind while doing a spatial task like walking through a building. That limitation is exactly what motivated Baddeley's revision.
Baddeley and Hitch (1974) proposed this model to replace the idea of a single STM with a multi-component system that actively manipulates information, not just stores it.
Tulving proposed that long-term memory isn't a single store but consists of at least three distinct systems (episodic, semantic, and procedural), each with its own encoding and retrieval processes.
Compare: Atkinson-Shiffrin vs. Baddeley: both are structural models, but Atkinson-Shiffrin treats STM as a single passive store while Baddeley breaks it into active, specialized components. If an FRQ asks why someone can remember a phone number while navigating a room, Baddeley's model explains this better because the tasks use separate components (phonological loop vs. visuospatial sketchpad).
These models shift focus from where information is stored to how it's processed. The depth and type of encoding matter more than time spent rehearsing.
Craik and Lockhart (1972) argued that memory isn't about moving information between stores. Instead, how well you remember something depends on how deeply you process it at encoding.
A common criticism of this model is that "depth" is hard to define independently of memory performance, which makes the theory somewhat circular: deep processing leads to better memory, and we know processing was deep because memory was better.
Paivio's (1971) theory proposes that memory has two independent but connected coding systems: one for verbal information and one for visual/imaginal information.
Compare: Levels of Processing vs. Dual Coding: both emphasize encoding quality over storage duration, but Levels of Processing focuses on semantic depth while Dual Coding focuses on representational format. Use Levels of Processing for explaining why studying for meaning beats rote memorization. Use Dual Coding for explaining why flashcards with images outperform text-only cards.
These models conceptualize memory as interconnected nodes rather than separate stores. Retrieval depends on activation patterns spreading through a web of associations.
Collins and Loftus (1975) proposed that semantic memory is organized as a network of concept nodes connected by associative links. The strength of each link reflects how closely related two concepts are.
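The mechanics of spreading activation can be made concrete with a small simulation. The sketch below is purely illustrative: the concepts, link weights, decay factor, and `spread` function are all hypothetical, not taken from Collins and Loftus's actual data, but the logic mirrors the model's core claim that activation flows outward along weighted links, weakening with distance.

```python
# Toy spreading-activation sketch. The network, weights, and decay value
# are made-up illustrations, not parameters from Collins & Loftus (1975).
network = {
    "doctor": {"nurse": 0.9, "hospital": 0.8, "lawyer": 0.2},
    "nurse": {"doctor": 0.9, "hospital": 0.7},
    "hospital": {"doctor": 0.8, "nurse": 0.7, "ambulance": 0.6},
    "lawyer": {"doctor": 0.2, "court": 0.9},
    "ambulance": {"hospital": 0.6},
    "court": {"lawyer": 0.9},
}

def spread(source, steps=2, decay=0.5):
    """Spread activation outward from a source concept.

    Each hop multiplies activation by the link strength and a decay
    factor, so closely related concepts end up more strongly activated
    than distant ones.
    """
    activation = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbor, weight in network.get(node, {}).items():
                boost = act * weight * decay
                if boost > activation.get(neighbor, 0.0):
                    activation[neighbor] = boost
                    next_frontier[neighbor] = boost
        frontier = next_frontier
    return activation

# Priming intuition: after activating "doctor", the closely linked
# concept "nurse" is far more activated than the distant "court".
act = spread("doctor")
print(act["nurse"] > act["court"])  # True
```

This is why priming works in the model: seeing "doctor" pre-activates "nurse", so "nurse" is recognized faster a moment later.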
Rumelhart and McClelland's (1986) PDP model (also called connectionism) takes a very different approach from spreading activation. Instead of discrete concept nodes, memory is represented as patterns of activation distributed across many simple processing units.
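The PDP idea of distributed storage can be demonstrated with a toy simulation of "graceful degradation." Everything here is an illustrative assumption, not the actual PDP architecture: each memory is a pattern spread across 100 simple units, retrieval picks the stored pattern most similar to a cue, and "brain damage" is simulated by silencing a subset of units.

```python
import random

random.seed(0)
N_UNITS = 100

# Hypothetical distributed memories: each is a pattern of +1/-1
# activations spread across all 100 units (a toy stand-in for PDP units;
# no single unit "holds" the memory of cat, dog, or bird).
memories = {
    name: [random.choice([-1, 1]) for _ in range(N_UNITS)]
    for name in ["cat", "dog", "bird"]
}

def recall(cue):
    """Retrieve the stored pattern most similar to the cue (dot product)."""
    return max(memories,
               key=lambda name: sum(c * m for c, m in zip(cue, memories[name])))

# Simulate diffuse damage: silence 30 of the 100 units carrying "cat".
damaged = list(memories["cat"])
for i in random.sample(range(N_UNITS), 30):
    damaged[i] = 0

# Graceful degradation: because the memory is distributed across many
# units, partial damage weakens the pattern but does not erase it,
# so recall still succeeds.
print(recall(damaged))  # "cat" -- the intact units still dominate the match
```

Contrast this with a localist scheme where one node equals one memory: damaging that single node would erase the memory completely rather than degrading it gradually.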
Compare: Spreading Activation vs. PDP/Connectionist Models: both use network metaphors, but Spreading Activation focuses on semantic relationships between discrete concepts while PDP models emphasize the neural-like mechanics of how distributed activation patterns emerge. Spreading Activation is better for explaining priming effects. PDP models are better for explaining how memories can be partial, reconstructed, or resistant to localized damage.
These models explain how existing knowledge structures influence the encoding and retrieval of new information. What you already know determines what you can learn and remember.
Bartlett (1932) introduced the idea of schemas, and the concept has been central to cognitive psychology ever since. A schema is an organized mental framework built from past experience that helps you interpret, encode, and retrieve new information.
The episodic/semantic distinction (formalized by Tulving, 1972) categorizes the content of long-term declarative memory rather than explaining a processing mechanism.
Compare: Schema Theory vs. Episodic/Semantic Model: Schema Theory explains how organized knowledge structures influence memory processing (and cause distortions), while the Episodic/Semantic distinction categorizes types of long-term memory content. Use Schema Theory to explain memory distortions and reconstructive errors. Use the Episodic/Semantic distinction to explain why you can know facts about your childhood without remembering specific events, or why certain types of amnesia affect one system but not the other.
| Category | Models |
|---|---|
| Structural/Store-Based | Atkinson-Shiffrin, Baddeley's Working Memory, Tulving's Memory Systems |
| Processing-Based | Levels of Processing, Dual Coding Theory |
| Network/Connectionist | Spreading Activation, PDP/Connectionist Models |
| Knowledge Organization | Schema Theory, Episodic/Semantic Memory |

| Exam Use Case | Best Model(s) |
|---|---|
| Explains encoding differences | Levels of Processing, Dual Coding |
| Explains retrieval mechanisms | Spreading Activation, PDP/Connectionist Models |
| Explains memory distortions | Schema Theory |
| Explains multitasking limits | Baddeley's Working Memory |
1. Both the Levels of Processing model and Dual Coding Theory emphasize encoding quality. What distinguishes their explanations for why some information is remembered better than others?
2. If a patient with brain damage can still recall general facts but cannot remember personal experiences, which model best explains this dissociation, and what specific memory systems are affected?
3. Compare and contrast the Atkinson-Shiffrin model with Baddeley's Working Memory model. What limitation of the original model did Baddeley's revision address?
4. A student uses the method of loci (imagining items placed in familiar locations) to memorize a list. Which two memory models best explain why this technique works?
5. An FRQ asks you to explain why eyewitness testimony can be unreliable. Which memory model would you use, and what specific mechanism would you describe?
6. How does the PDP model's concept of "graceful degradation" explain why brain damage typically causes partial memory loss rather than the complete erasure of specific memories?