Synchronization primitives are the foundation of everything you'll encounter in concurrent programming and operating systems design. When you're tested on OS concepts, you're not just being asked to define a mutex or semaphore—you're being evaluated on whether you understand why different primitives exist, when to use each one, and what trade-offs they involve. These concepts connect directly to bigger themes like process coordination, resource management, deadlock prevention, and system performance optimization.
The primitives covered here demonstrate core principles: mutual exclusion, signaling and waiting, busy-waiting vs. blocking, and lock-free programming. Each primitive solves a specific synchronization problem, and exam questions often ask you to identify which tool fits which scenario. Don't just memorize definitions—know what problem each primitive solves and when you'd choose one over another.
Mutual exclusion primitives (mutexes, spinlocks, critical sections) enforce the fundamental rule that only one thread at a time can access a shared resource. The core mechanism is acquiring exclusive ownership before entering a critical section and releasing it upon exit.
Compare: Mutex vs. Spinlock—both enforce mutual exclusion, but mutexes block and context-switch while spinlocks busy-wait. Choose spinlocks for very short critical sections on multiprocessor systems; choose mutexes when wait times are unpredictable or long.
Signaling primitives such as semaphores and condition variables allow threads to communicate about state changes and coordinate their execution. The key mechanism is enabling threads to sleep efficiently until a condition becomes true, rather than wasting cycles polling.
wait() releases the mutex and blocks; signal() wakes one waiting thread; broadcast() wakes all waiting threads.

Compare: Semaphore vs. Condition Variable—semaphores maintain state (the count) and can be signaled before any thread waits, while condition variables are stateless and signals are lost if no thread is waiting. Use semaphores for resource counting; use condition variables for complex predicates.
Group synchronization primitives such as barriers synchronize groups of threads at specific execution points. The mechanism blocks each thread until all participants reach the synchronization point or complete their phase.
Compare: Barrier vs. Condition Variable—barriers synchronize a fixed group of threads at a known point, while condition variables synchronize threads based on arbitrary runtime conditions. Barriers are simpler when you know exactly how many threads must synchronize.
Read-write locks improve concurrency for workloads where reads vastly outnumber writes. The mechanism allows shared access for readers while maintaining exclusive access for writers.
Compare: Mutex vs. Read-Write Lock—mutexes serialize all access regardless of operation type, while read-write locks allow reader parallelism. Use read-write locks when read-to-write ratio is high (e.g., 100:1); the overhead isn't worth it for balanced workloads.
Lock-free primitives provide synchronization without traditional blocking locks. The mechanism relies on hardware-supported instructions that complete atomically, enabling progress guarantees even when individual threads are delayed.
compare_and_swap(addr, expected, new), fetch_and_add(addr, value), and test_and_set(addr) are the canonical atomic operations.

Compare: Lock-based vs. Lock-free—lock-based approaches are simpler to reason about but risk deadlock and blocking delays; lock-free approaches using atomic operations guarantee progress but are harder to implement correctly. Lock-free is preferred for high-contention scenarios in real-time systems.
| Concept | Best Examples |
|---|---|
| Exclusive access (one thread) | Mutex, Spinlock, Critical Section |
| Resource counting/pooling | Counting Semaphore |
| Wait for condition | Condition Variable, Monitor |
| Group synchronization | Barrier |
| Read-heavy workloads | Read-Write Lock |
| Lock-free programming | Atomic Operations (CAS, fetch-and-add) |
| Short critical sections | Spinlock |
| Complex synchronization logic | Monitor, Condition Variable + Mutex |
1. Which two primitives both enforce mutual exclusion but differ in how they handle waiting—and when would you choose each one?
2. A producer-consumer problem requires threads to wait until a buffer has space (producer) or data (consumer). Which primitive combination would you use, and why wouldn't a simple mutex suffice?
3. Compare and contrast semaphores and condition variables: what happens if you signal each one when no thread is currently waiting?
4. Your system has a shared configuration object read by 50 threads per second but updated only once per minute. Which synchronization primitive optimizes this workload, and what risk must you mitigate?
5. FRQ-style: Explain why atomic operations like compare-and-swap enable lock-free data structures, and describe one advantage and one disadvantage compared to mutex-based synchronization.