
💾 Intro to Computer Architecture

Addressing Modes in Assembly Language

Why This Matters

Understanding addressing modes is fundamental to mastering computer architecture because they reveal how the CPU actually locates and retrieves data during instruction execution. You're being tested on the trade-offs between speed, flexibility, and memory efficiency—concepts that appear throughout processor design, from instruction encoding to cache optimization. When an exam asks about performance implications or why certain code patterns exist, addressing modes are often the underlying answer.

Don't just memorize the syntax of each mode. Instead, focus on when and why each mode is optimal: What problem does it solve? What's the cost in clock cycles or instruction size? The best exam answers connect addressing modes to broader concepts like instruction cycle efficiency, data structure implementation, and position-independent code. Master the trade-offs, and you'll handle any FRQ that asks you to analyze or compare instruction formats.


Speed-Optimized Modes: No Memory Access Required

These modes prioritize execution speed by keeping operands in the fastest storage locations—either embedded in the instruction itself or stored in registers. The key principle: fewer memory accesses mean faster execution.

Immediate Addressing

  • Operand embedded directly in the instruction—no memory fetch required, making this the fastest addressing mode
  • Best for constants and known values like loop counters, bit masks, or initialization values (e.g., MOV R1, #5)
  • Limited by instruction format size—typically 8-bit or 16-bit immediate fields constrain the range of values
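
A minimal sketch in the same generic pseudo-assembly used by the examples in this guide (illustrative notation, not any one real ISA); the register names are arbitrary:

    MOV R1, #5          ; immediate: the constant 5 is encoded inside the instruction itself
    AND R2, R2, #0xFF   ; immediate bit mask: the operand never has to be fetched from memory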

Register Addressing

  • Operand stored in a CPU register—registers are the fastest accessible storage in the memory hierarchy
  • Eliminates memory bottlenecks since register-to-register operations complete in a single cycle on most architectures
  • Constrained by register count—RISC architectures may have 32 registers, while 32-bit x86 exposes only eight general-purpose registers
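
A register-addressing sketch in the same illustrative notation; R1 through R4 are arbitrary and assumed to already hold useful values:

    ADD R3, R1, R2      ; all three operands are registers: no memory access at all
    MOV R4, R3          ; register-to-register copy, typically a single cycle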

Compare: Immediate vs. Register Addressing—both avoid memory access for speed, but immediate embeds a fixed value in the instruction while register addressing references a variable value that can change at runtime. If an FRQ asks about loading constants vs. performing arithmetic on variables, this distinction is key.


Memory-Direct Modes: Explicit Address Specification

These modes access main memory by specifying addresses directly or through a single level of indirection. The trade-off: simpler addressing logic but slower execution due to memory access latency.

Direct Addressing

  • Memory address hardcoded in the instruction—straightforward but requires one memory access to fetch the operand
  • Useful for global variables and fixed memory-mapped locations where the address never changes
  • Address field size limits reach—a 16-bit address field can only access $2^{16}$ (65,536) memory locations directly
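
A direct-addressing sketch in the guide's notation; COUNTER is a placeholder for some fixed global address:

    LOAD  R1, COUNTER   ; the address of COUNTER is hardcoded in the instruction: one memory read
    ADD   R1, R1, #1    ; update the value in a register
    STORE R1, COUNTER   ; write back to the same fixed location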

Indirect Addressing

  • Address stored in a register or memory location—the instruction points to where the address is, not the data itself
  • Enables dynamic memory access essential for pointers, linked lists, and runtime-determined locations
  • Requires extra memory fetch—first retrieve the address, then retrieve the data, doubling memory access overhead
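
An indirect-addressing sketch; HEAD_PTR is a placeholder for a memory location that holds a pointer (e.g., to a linked-list node), showing where the extra fetch comes from:

    LOAD R2, HEAD_PTR   ; fetch #1: read the address stored at HEAD_PTR
    LOAD R1, (R2)       ; fetch #2: use that address to read the actual data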

Compare: Direct vs. Indirect Addressing—direct is faster (one memory access) but inflexible, while indirect adds a memory fetch but supports dynamic data structures. When asked about implementing pointers or linked lists, indirect addressing is your answer.


Computed Address Modes: Base + Offset Calculations

These modes calculate the effective address by combining a base value with an offset or index. The underlying mechanism: $\text{Effective Address} = \text{Base} + \text{Offset}$, enabling efficient access to structured data.

Base Register Addressing

  • Base register plus constant offset—ideal for accessing fields within structs or records (e.g., LOAD R1, 100(R2))
  • Base register holds the starting address while the offset identifies specific members, keeping code clean and relocatable
  • Supports relocatable data structures—change the base register, and all field accesses automatically adjust
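
A struct-access sketch using the guide's offset(base) form; the offsets 0 and 8 are made up for illustration, and R2 is assumed to hold the record's starting address:

    LOAD R1, 0(R2)      ; field at offset 0 of the record
    LOAD R3, 8(R2)      ; field at offset 8; relocating the record only means changing R2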

Indexed Addressing

  • Base address plus variable index—the index typically lives in a register, enabling iteration (e.g., LOAD R1, (R2 + R3))
  • Perfect for array traversal where R2 holds the array start and R3 increments through elements
  • Scales with element size—some architectures multiply the index by 2, 4, or 8 to handle different data types automatically
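
An array-traversal sketch; R2 is assumed to hold the array's base address and R3 a byte offset into it:

    LOAD R1, (R2 + R3)  ; indexed: effective address = base (R2) + index (R3)
    ADD  R3, R3, #4     ; step to the next element, assuming 4-byte elements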

Compare: Base Register vs. Indexed Addressing—both compute addresses, but base register uses a constant offset (good for struct fields) while indexed uses a variable index (good for array iteration). FRQs often ask which mode suits arrays vs. records—know the difference.


Control Flow and Stack Modes: Program Structure Support

These modes support branching, function calls, and local variable management. They're essential for implementing high-level constructs like loops, conditionals, and recursion.

Relative Addressing

  • Offset added to the program counter (PC)—the effective address is $\text{PC} + \text{Offset}$, enabling position-independent jumps
  • Essential for branches and loops: JMP LABEL calculates the target relative to the current instruction
  • Enables relocatable code—programs can load at any memory address without modifying branch targets
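
A relative-branch sketch; NEXT is just a placeholder label, and the assembler is assumed to encode the jump as a signed offset from the program counter:

    JMP NEXT            ; stored as "distance from here" (PC + offset), not as an absolute address
    NEXT:
    MOV R1, #0          ; because only the distance is encoded, this code can load at any address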

Stack Addressing

  • Implicit addressing via stack pointer (SP)—push and pop operations automatically manage the address
  • LIFO structure supports function calls—return addresses, parameters, and local variables are naturally scoped
  • Critical for recursion—each function call gets its own stack frame, isolating local state automatically
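
A stack sketch, assuming a machine where CALL pushes the return address; HELPER is a placeholder routine:

    PUSH R1             ; SP is adjusted and R1 is written at the new top of stack (implicit address)
    CALL HELPER         ; the return address is pushed automatically
    POP  R1             ; after the call returns, restore R1 and move SP back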

Compare: Relative vs. Stack Addressing—relative addressing handles horizontal control flow (jumps and branches) while stack addressing handles vertical control flow (function call/return hierarchy). Both enable modular, position-independent code but serve different structural purposes.


Quick Reference Table

Concept                                 Best Examples
Fastest execution (no memory access)    Immediate, Register
Fixed memory locations                  Direct Addressing
Dynamic/pointer-based access            Indirect Addressing
Struct/record field access              Base Register Addressing
Array traversal and iteration           Indexed Addressing
Position-independent branching          Relative Addressing
Function calls and recursion            Stack Addressing
Computed effective address              Base Register, Indexed, Relative

Self-Check Questions

  1. Which two addressing modes avoid memory access entirely, and why does this matter for execution speed?

  2. You're implementing a linked list traversal. Which addressing mode is essential, and what's the performance cost compared to direct addressing?

  3. Compare base register addressing and indexed addressing: if you're accessing the third field of a struct vs. the third element of an array, which mode fits each scenario?

  4. Why does relative addressing enable position-independent code, and what instruction type most commonly uses it?

  5. An FRQ asks you to explain how a function call stores its return address and local variables. Which addressing mode and data structure are involved, and what memory access pattern do they use?