Every data structures exam tests your ability to choose the right tool for the job—and that means understanding not just what operations exist, but why certain structures handle them efficiently while others don't. You're being tested on time complexity trade-offs, memory access patterns, and the fundamental question: when does constant time matter, and when is logarithmic good enough?
The operations covered here—insertion, deletion, traversal, and search—form the backbone of algorithm analysis. Whether you're writing an FRQ about implementing a priority queue or analyzing why a hash table outperforms a linked list for lookups, you need to connect each operation to its underlying mechanism: contiguous vs. linked memory, ordering constraints, and hashing principles. Don't just memorize that arrays have O(n) insertion—know why shifting elements creates that cost.
These structures store elements in a defined order and require traversal to access arbitrary elements. The key trade-off is between memory locality (arrays) and flexibility (linked lists).
Compare: Arrays vs. Linked Lists—both store ordered sequences, but arrays offer O(1) random access while linked lists offer O(1) insertion/deletion at known nodes. If an FRQ asks about implementing a playlist with frequent additions, linked list is your answer; for binary search, you need an array.
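A minimal sketch of this trade-off, using a Python list as the array and a hand-rolled `Node` class (illustrative names) as the linked list:

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

# Array (Python list): O(1) random access, O(n) insertion at the front,
# because every existing element must shift one slot right.
arr = [10, 20, 30]
assert arr[1] == 20   # direct indexing, no traversal
arr.insert(0, 5)      # shifts 10, 20, 30 over

# Linked list: O(1) insertion at a known node, O(n) access by position,
# because reaching index i requires walking i links.
head = Node(10, Node(20, Node(30)))
head = Node(5, head)  # splice in front without moving anything
```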
Stacks and queues restrict access to specific ends of the structure. This constraint is a feature, not a limitation—it guarantees constant-time operations and models real-world processes.
Compare: Stacks vs. Queues—both offer O(1) insertion and removal, but stacks reverse order (LIFO) while queues preserve it (FIFO). When asked about undo functionality, think stack; for processing requests fairly, think queue.
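A quick sketch of both disciplines, using a Python list as the stack and `collections.deque` as the queue (both give O(1) operations at the restricted ends); the string values are illustrative:

```python
from collections import deque

# Stack (LIFO): undo history — the most recent action is undone first.
stack = []
stack.append("type A")
stack.append("type B")
assert stack.pop() == "type B"

# Queue (FIFO): service requests handled in arrival order.
queue = deque()
queue.append("request 1")
queue.append("request 2")
assert queue.popleft() == "request 1"
```

Note that a plain list works as a stack but not as a queue: `list.pop(0)` is O(n) because it shifts every remaining element, which is exactly why `deque` exists.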
Trees and heaps organize data in parent-child relationships. The branching factor and balance constraints determine whether operations achieve logarithmic efficiency.
Compare: Binary Search Trees vs. Heaps—BSTs maintain full sorted order (in-order traversal) while heaps only guarantee the root is extreme. Use a BST when you need range queries; use a heap when you only need repeated access to the minimum or maximum.
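The heap side of this comparison can be sketched with Python's `heapq` module, which implements a binary min-heap on top of a list:

```python
import heapq

# Min-heap: O(log n) insert and extract-min, but no full sorted order —
# only the root is guaranteed to be the extreme element.
heap = []
for x in [7, 2, 9, 4]:
    heapq.heappush(heap, x)

assert heap[0] == 2               # peek at the minimum in O(1)
assert heapq.heappop(heap) == 2   # extract-min restructures in O(log n)
assert heapq.heappop(heap) == 4
```

Contrast this with a BST: an in-order traversal of a BST visits every key in sorted order, while iterating over `heap` directly yields no useful ordering beyond the root.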
Hash tables trade ordering for speed by using a hash function to compute storage locations directly. The magic of O(1) average-case performance depends entirely on minimizing collisions.
Compare: Hash Tables vs. Binary Search Trees—hash tables offer O(1) average lookup but provide no ordering; BSTs guarantee O(log n) operations while maintaining sorted order. If you need to find all keys in a range, BST wins; for simple key-value lookups, hash tables dominate.
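A short sketch of the ordering trade-off, using Python's `dict` as the hash table (the names and grade values are illustrative):

```python
# Hash table (dict): O(1) average key-value lookup, no key ordering.
grades = {"alice": 91, "bob": 84, "carol": 78}
assert grades["bob"] == 84

# A range query exposes the weakness: a hash table must scan all n
# entries, whereas a BST's in-order traversal would walk only the
# matching range.
in_range = sorted(k for k, v in grades.items() if 80 <= v <= 95)
assert in_range == ["alice", "bob"]
```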
Graphs model relationships without hierarchical constraints. BFS and DFS are the two fundamental strategies, each suited to different problem types.
Compare: BFS vs. DFS—both traverse all reachable nodes in O(V + E) time, but BFS finds shortest paths in unweighted graphs while DFS excels at detecting cycles and topological ordering. FRQs about "minimum number of steps" signal BFS; "is there a cycle?" signals DFS.
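The "minimum number of steps" pattern can be sketched as a BFS over an adjacency-dict graph (the graph and node names here are illustrative, not from a specific FRQ):

```python
from collections import deque

def bfs_distance(graph, start, goal):
    """Shortest path length in an unweighted graph: BFS explores level
    by level, so the first time we reach a node is via a shortest path."""
    dist = {start: 0}
    q = deque([start])
    while q:
        node = q.popleft()
        if node == goal:
            return dist[node]
        for nbr in graph[node]:
            if nbr not in dist:        # dist doubles as the visited set
                dist[nbr] = dist[node] + 1
                q.append(nbr)
    return None  # goal unreachable from start

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
assert bfs_distance(graph, "A", "D") == 2  # A -> B -> D (or A -> C -> D)
```

Swapping the `deque` for a stack (and popping from the same end you push) turns this into an iterative DFS, which visits the same nodes but no longer guarantees shortest paths.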
These algorithms transform and query data. Understanding their complexity guarantees helps you choose the right approach for different input characteristics.
Compare: Quicksort vs. Mergesort—both achieve O(n log n) average case, but quicksort risks O(n²) worst case while mergesort guarantees O(n log n). When stability matters or you're sorting linked lists, choose mergesort; for in-place sorting of arrays, quicksort is typically faster in practice.
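A minimal mergesort sketch showing where the guarantee and the stability come from — the input always splits in half (hence O(n log n) regardless of input order), and the `<=` in the merge step keeps equal elements in their original relative order:

```python
def merge_sort(items):
    """Guaranteed O(n log n), stable; uses O(n) extra space."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= takes from the left half on ties,
            merged.append(left[i])  # preserving stability
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

assert merge_sort([5, 2, 9, 1, 5]) == [1, 2, 5, 5, 9]
```

Quicksort's risk comes from the opposite property: its split depends on the pivot, and a consistently bad pivot (e.g., always the smallest element) degrades the recursion to O(n²).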
| Concept | Best Examples |
|---|---|
| O(1) access/operations | Array indexing, stack push/pop, queue enqueue/dequeue, hash table lookup (average) |
| O(log n) operations | Binary search, balanced BST operations, heap insert/extract |
| O(n) traversal required | Linked list access, array insertion/deletion, linear search |
| O(n log n) sorting | Mergesort (guaranteed), quicksort (average), heapsort |
| Ordering preserved | Arrays, linked lists, BSTs (in-order), queues |
| No ordering guarantee | Hash tables, heaps (beyond root), sets |
| Best for priority queues | Heaps (min-heap or max-heap) |
| Best for key-value lookup | Hash tables (O(1) average), BSTs (O(log n), ordered operations) |
1. Which two structures offer O(1) insertion and deletion but differ in the order elements are removed? What real-world scenarios favor each?
2. A hash table and a balanced BST both support insertion, deletion, and lookup. Compare their time complexities and explain when you would choose a BST despite its slower O(log n) average case.
3. You need to process customer service requests in the order they arrive. Which data structure should you use, and what are its core operations?
4. Compare BFS and DFS: both have O(V + E) complexity, so what determines which algorithm to use for a given graph problem?
5. An FRQ asks you to implement a system that always returns the smallest element efficiently. Compare using a sorted array, a BST, and a min-heap—which offers the best trade-off for repeated insertions and extractions?