Process management is the beating heart of any operating system—it's how your OS juggles dozens or hundreds of programs simultaneously while making it look effortless. You're being tested on your understanding of how processes are created, scheduled, synchronized, and terminated, and more importantly, why these mechanisms exist. Every concept here connects to fundamental trade-offs: efficiency vs. fairness, parallelism vs. safety, and performance vs. overhead.
When exam questions hit, they won't just ask you to define a PCB or list scheduling algorithms. They'll ask you to analyze scenarios: Which algorithm minimizes wait time? What happens when synchronization fails? How does the OS prevent resource conflicts? Don't just memorize the terms—know what problem each technique solves and what trade-offs it introduces.
Before a process can be scheduled or synchronized, the OS must understand what a process is and how to track it. These foundational concepts—states, transitions, and control blocks—form the vocabulary for everything else in process management.
Compare: Process States vs. PCB—states describe where a process is in its lifecycle, while the PCB stores everything the OS knows about that process. FRQs often ask how a state transition updates the PCB.
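To make that relationship concrete, here is a simplified sketch of what a PCB might contain. The field names and struct layout are illustrative only (a real kernel structure like Linux's `task_struct` holds far more); the point is that a state transition is recorded by updating fields in this block.

```c
#include <stdint.h>

/* Illustrative process states matching the classic lifecycle diagram. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* A simplified, hypothetical Process Control Block. */
typedef struct {
    int          pid;              /* unique process identifier             */
    proc_state_t state;            /* where the process is in its lifecycle */
    uint64_t     program_counter;  /* next instruction to execute           */
    uint64_t     registers[16];    /* saved CPU register contents           */
    int          priority;         /* scheduling priority                   */
    void        *page_table;       /* memory-management information         */
    int          open_files[32];   /* I/O and file-descriptor state         */
} pcb_t;

/* Example transition: a Running process issues a blocking I/O request,
 * so the OS marks it WAITING and saves its execution context in the PCB. */
void block_for_io(pcb_t *p) {
    p->state = WAITING;
    /* ...save program counter and registers, enqueue on the device's wait queue... */
}
```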
Understanding how processes begin and end reveals the OS's role as a resource manager. The kernel must allocate resources at creation and reclaim them at termination—failures here cause memory leaks and zombie processes.
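A minimal POSIX sketch shows the whole lifecycle: the parent creates a child with fork(), the child terminates, and the parent reaps it with waitpid(). Skipping that final step is exactly what leaves a zombie process behind. (The exit status value is arbitrary.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* duplicate the calling process */

    if (pid < 0) {
        perror("fork");              /* creation failed: no resources */
        exit(1);
    } else if (pid == 0) {
        printf("child %d running\n", getpid());
        exit(42);                    /* child terminates with a status */
    } else {
        int status;
        waitpid(pid, &status, 0);    /* parent reaps the child; without this,
                                        the child lingers as a zombie */
        printf("child exited with %d\n", WEXITSTATUS(status));
    }
    return 0;
}
```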
fork() in Unix/Linux duplicates the parent process, while CreateProcess() in Windows builds a new process from scratch.

Scheduling determines which process runs when—the OS must balance fairness, efficiency, and responsiveness. These techniques directly impact user experience and system throughput.
Compare: Round Robin vs. Priority Scheduling—RR guarantees fairness through equal time slices, while Priority Scheduling optimizes for importance but requires starvation prevention. If an FRQ asks about real-time systems, Priority Scheduling is your answer; for time-sharing systems, discuss RR.
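A small simulation makes the Round Robin behavior concrete. This sketch uses made-up burst times and a quantum of 4 time units, cycles through the ready queue giving each unfinished process one slice per pass, and reports each job's wait time (assuming all arrive at time 0).

```c
#include <stdio.h>

#define NPROC   3
#define QUANTUM 4

int main(void) {
    /* Hypothetical CPU burst times, in time units. */
    int burst[NPROC]     = {10, 5, 8};
    int remaining[NPROC] = {10, 5, 8};
    int finish[NPROC]    = {0};
    int done = 0, clock = 0;

    /* Round Robin: every pass gives each unfinished process at most one quantum. */
    while (done < NPROC) {
        for (int i = 0; i < NPROC; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            clock        += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                finish[i] = clock;
                done++;
            }
        }
    }

    for (int i = 0; i < NPROC; i++)
        printf("P%d wait time = %d\n", i, finish[i] - burst[i]);
    return 0;
}
```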
When the CPU switches between processes, the OS must preserve and restore execution state perfectly. Context switching is the mechanical process that makes multitasking possible—but it comes at a cost.
Compare: Context Switching vs. Mode Switching—context switching changes which process runs (expensive), while mode switching changes privilege level within the same process (cheaper). Know the difference for questions about system call overhead.
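The mechanics are easiest to see as pseudocode-style C: save the outgoing process's execution state into its PCB, restore the incoming one, and only then resume user code. This reuses the illustrative pcb_t sketched earlier; save_registers, load_page_table, and the other helpers are placeholders for hardware-specific operations, since real kernels do this in architecture-specific assembly.

```c
/* Conceptual context switch (not a real kernel routine). */
void context_switch(pcb_t *prev, pcb_t *next) {
    /* 1. Save the outgoing process's CPU state into its PCB. */
    save_registers(prev->registers);
    prev->program_counter = read_program_counter();
    prev->state = READY;                 /* or WAITING, if it blocked */

    /* 2. Switch address spaces -- flushed TLB entries are part of the cost. */
    load_page_table(next->page_table);

    /* 3. Restore the incoming process's saved state and resume it. */
    next->state = RUNNING;
    restore_registers(next->registers);
    jump_to(next->program_counter);
}
```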
When multiple processes or threads share resources, chaos lurks. Synchronization and communication mechanisms prevent race conditions, deadlocks, and data corruption—the nightmares of concurrent programming.
Compare: Synchronization vs. IPC—synchronization controls access to shared resources, while IPC enables data transfer between processes. A mutex prevents two processes from corrupting shared memory; a pipe lets them send messages. FRQs may ask which mechanism fits a given scenario.
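Here is a minimal sketch of the IPC side using a POSIX pipe: the parent writes a message into one end and the child reads it from the other, with no shared memory involved. The message text is arbitrary.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                             /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                    /* child: the receiver */
        close(fd[1]);                     /* close the unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        _exit(0);
    }

    close(fd[0]);                         /* parent: the sender */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                           /* reap the child */
    return 0;
}
```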
Threads offer lightweight concurrency within a single process. By sharing address space while maintaining separate execution contexts, threads reduce overhead compared to full processes—but introduce new synchronization challenges.
Compare: Processes vs. Threads—processes have isolated memory (safer but expensive to create/switch), while threads share memory (faster but require synchronization). This trade-off appears constantly in systems design questions.
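A short pthreads sketch shows both sides of that trade-off: the two threads see the same global counter automatically because they share one address space, but without the mutex the increments race and the final value becomes unpredictable. The iteration count is arbitrary.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared: one address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* without this, increments race */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;                               /* each thread keeps its own stack and registers */
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* cheap: no new address space is copied */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);        /* 200000 only because of the mutex */
    return 0;
}
```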
| Concept | Best Examples |
|---|---|
| Process Lifecycle | Process States, PCB, Creation/Termination |
| Scheduling Strategies | FCFS, Round Robin, Priority Scheduling, SJN |
| Preemption Trade-offs | Preemptive vs. Non-preemptive, Context Switch Overhead |
| Synchronization Primitives | Mutexes, Semaphores, Monitors |
| IPC Methods | Pipes, Message Queues, Shared Memory |
| Concurrency Models | Multithreading, Process-based Parallelism |
| Starvation Prevention | Aging, Fair Scheduling Algorithms |
| State Preservation | PCB, Context Switching |
A process moves from Running to Waiting state. What event likely caused this transition, and what PCB fields would the OS update?
Compare Round Robin and Shortest Job Next scheduling: which minimizes average wait time, and which guarantees fairness? What's the trade-off?
You're designing a system where multiple processes must access a shared database. Would you use a mutex, a semaphore, or a monitor? Justify your choice and explain what could go wrong without synchronization.
Explain why threads are "lighter weight" than processes. What do threads share, and what must remain separate? What new problems does this sharing introduce?
A low-priority background process hasn't run in hours despite being in the Ready state. Identify the problem and describe a scheduling technique that would solve it.