Understanding CPU pipeline stages is fundamental to grasping how modern processors achieve high performance. You're being tested on concepts like instruction-level parallelism, pipeline hazards, throughput vs. latency tradeoffs, and the fetch-decode-execute cycle. These stages don't exist in isolation; they work together to allow multiple instructions to be "in flight" simultaneously, which is why a 5-stage pipeline can theoretically improve throughput by up to 5x compared to single-cycle execution.
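The "up to 5x" claim follows from a simple cycle count: a pipelined run needs one cycle per stage to fill the pipe, then one cycle per remaining instruction. A minimal sketch (ideal conditions, no hazards or stalls; the function name is illustrative):

```python
# Hypothetical illustration: ideal speedup of a k-stage pipeline over
# single-cycle execution for n instructions, ignoring hazards and stalls.
# Time is measured in stage-delays, so one single-cycle instruction = k units.
def pipeline_speedup(n_instructions: int, stages: int) -> float:
    single_cycle_time = n_instructions * stages       # each instruction pays for every stage
    pipelined_time = stages + (n_instructions - 1)    # fill the pipe, then 1 per cycle
    return single_cycle_time / pipelined_time

print(pipeline_speedup(5, 5))          # short run: well below 5x (fill cost dominates)
print(pipeline_speedup(1_000_000, 5))  # long run: approaches the stage count, 5x
```

The speedup only approaches the stage count asymptotically, which is why short instruction streams (or frequent flushes) never see the full benefit.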
When exam questions ask about pipeline stalls, data hazards, or control hazards, they're really testing whether you understand what each stage does and what resources it needs. Don't just memorize the stage names; know what hardware components are active at each stage, what data flows between stages, and what happens when dependencies force the pipeline to wait. This conceptual understanding will help you tackle FRQ scenarios involving hazard detection, forwarding, and branch prediction.
These first two stages focus on getting the instruction ready for execution: fetching it from memory and figuring out what it actually means. Both stages interact heavily with memory and control logic before any real computation happens.
Compare: IF vs. ID. Both happen before any computation, but IF interacts with instruction memory while ID interacts with the register file. If an FRQ asks where a data hazard is detected, ID is your answer since that's where register values are read.
These middle stages perform the actual work: calculating results and accessing data memory. This is where the ALU does its job and where load/store instructions interact with the memory hierarchy.
Compare: EX vs. MEM. EX uses the ALU for computation, while MEM uses data memory for storage. R-type instructions only need EX; load/store instructions need both. This distinction matters for understanding which hazards affect which instruction types.
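The per-instruction-class stage usage can be summarized in a small table. This is an illustrative simplification of the classic 5-stage design (instructions still flow through every stage physically; "used" here means the stage does meaningful work):

```python
# Illustrative mapping: which stages do real work for each instruction class
# in a classic 5-stage pipeline (a simplification for study purposes).
STAGES_USED = {
    "r_type": ["IF", "ID", "EX", "WB"],         # ALU result; MEM is idle
    "load":   ["IF", "ID", "EX", "MEM", "WB"],  # address calc in EX, read in MEM
    "store":  ["IF", "ID", "EX", "MEM"],        # writes memory, no register result
}

for kind, stages in STAGES_USED.items():
    print(f"{kind}: {' -> '.join(stages)}")
```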
The final stage ensures computed results become visible to future instructions. Without this stage, no instruction would ever produce a lasting effect on processor state.
Compare: MEM vs. WB. Both can provide the final result, but MEM provides data from memory (loads) while WB writes any result to registers. Understanding this split is essential for implementing data forwarding paths.
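That split is exactly what a forwarding mux has to respect: when two in-flight instructions both target a source register, the newer result (EX/MEM) must win over the older one (MEM/WB). A minimal sketch, with pipeline registers modeled as hypothetical `(dest_reg, value)` pairs:

```python
# Hypothetical sketch of forwarding-mux priority: prefer the newest in-flight
# value of a source register, falling back to the register-file read from ID.
def forward_source(src_reg, ex_mem, mem_wb, reg_file_value):
    """ex_mem / mem_wb are (dest_reg, value) pairs or None if no write pending.
    Register 0 is hardwired to zero in MIPS/RISC-V, so it is never forwarded."""
    if src_reg is None or src_reg == 0:
        return reg_file_value
    if ex_mem is not None and ex_mem[0] == src_reg:
        return ex_mem[1]       # newest: ALU result sitting in EX/MEM
    if mem_wb is not None and mem_wb[0] == src_reg:
        return mem_wb[1]       # older: loaded/ALU value sitting in MEM/WB
    return reg_file_value      # no in-flight producer: register file is current

print(forward_source(5, (5, 42), (5, 7), 0))   # 42: EX/MEM has priority
```

If EX/MEM and MEM/WB both match, taking the MEM/WB value would silently use a stale result, which is a classic exam trap.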
| Concept | Best Examples |
|---|---|
| Memory interaction | IF (instruction memory), MEM (data memory) |
| Register file access | ID (read), WB (write) |
| ALU usage | EX stage exclusively |
| Control signal generation | ID stage |
| Address calculation | EX (for branches and memory operations) |
| Pipeline register boundaries | IF/ID, ID/EX, EX/MEM, MEM/WB |
| Stages skipped by some instructions | MEM (by R-type), WB (by stores) |
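The pipeline-register boundaries in the table above are easiest to see in the cycle-by-cycle occupancy diagram that FRQs often ask you to draw. A minimal generator for the ideal case (no stalls or flushes; the function name is illustrative):

```python
# Minimal sketch: cycle-by-cycle stage occupancy for an ideal 5-stage
# pipeline with no hazards; each row is one instruction, each column one cycle.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_diagram(n_instructions: int) -> list[list[str]]:
    total_cycles = len(STAGES) + n_instructions - 1   # fill time + drain time
    rows = []
    for i in range(n_instructions):
        row = [""] * total_cycles
        for s, name in enumerate(STAGES):
            row[i + s] = name                         # instruction i enters stage s at cycle i+s
        rows.append(row)
    return rows

for r in pipeline_diagram(3):
    print(" ".join(f"{c:>3}" for c in r))
```

Reading down a column shows every stage busy with a different instruction at once, which is the instruction-level parallelism the opening section describes.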
Which two stages interact with memory, and what type of memory does each access?
If a load instruction is followed immediately by an add instruction that uses the loaded value, at which stage is the data hazard detected, and why?
Compare and contrast what happens during the EX stage for an R-type arithmetic instruction versus a load instruction.
A store instruction uses the MEM stage but not the WB stage. Explain why this makes sense given what each stage does.
If you were implementing data forwarding to reduce stalls, which stages would need forwarding paths between them, and what values would be forwarded?