Operating Systems

Process Management Techniques


Why This Matters

Process management is how your OS juggles dozens or hundreds of programs at the same time while making it look effortless. You're being tested on your understanding of how processes are created, scheduled, synchronized, and terminated, and more importantly, why these mechanisms exist. Every concept here connects to fundamental trade-offs: efficiency vs. fairness, parallelism vs. safety, and performance vs. overhead.

Exam questions won't just ask you to define a PCB or list scheduling algorithms. They'll ask you to analyze scenarios: Which algorithm minimizes wait time? What happens when synchronization fails? How does the OS prevent resource conflicts? Don't just memorize the terms. Know what problem each technique solves and what trade-offs it introduces.


Process Lifecycle Fundamentals

Before a process can be scheduled or synchronized, the OS must understand what a process is and how to track it. These foundational concepts form the vocabulary for everything else in process management.

Process States and Transitions

Every process passes through a defined set of states during its lifetime. The five core states are:

  • New — the process is being created and resources are being allocated
  • Ready — the process is loaded in memory and waiting for CPU time
  • Running — the process's instructions are actively executing on the CPU
  • Waiting (Blocked) — the process can't continue until some event completes (like an I/O operation finishing or a signal arriving)
  • Terminated — execution is done and the OS is cleaning up resources

Transitions are triggered by specific events. A scheduling decision moves a process from Ready → Running. An I/O request triggers Running → Waiting. When that I/O completes, the process moves Waiting → Ready (not directly back to Running). Completion moves a process to Terminated.
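These transition rules can be written down as a lookup table. The sketch below is purely illustrative (a kernel does not store states this way); the event names are invented for the example:

```python
# Allowed transitions in the five-state model: (state, event) -> next state
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "io_request"): "waiting",
    ("waiting", "io_complete"): "ready",   # back to Ready, not Running
    ("running", "timeout"): "ready",       # preemption by the scheduler
    ("running", "exit"): "terminated",
}

def step(state, event):
    return TRANSITIONS[(state, event)]

# Trace a process: admitted, dispatched, blocks on I/O, resumes, finishes
s = "new"
for event in ["admit", "dispatch", "io_request", "io_complete",
              "dispatch", "exit"]:
    s = step(s, event)
```

Tracing a sequence of events this way is exactly the skill exam state-diagram questions test.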

State diagrams are common on exams. Expect questions asking you to trace a process through states given a sequence of events.

Process Control Blocks (PCB)

The PCB is the OS's data structure for tracking everything about a single process. Each process gets exactly one PCB, and it contains:

  • Process ID (PID) and current state
  • CPU register values (program counter, stack pointer, general-purpose registers)
  • Memory management info (page tables, segment tables, base/limit registers)
  • Scheduling info (priority level, scheduling queue pointers)
  • I/O and resource info (open file descriptors, allocated devices)

The PCB is what makes context switching possible. When the OS pauses a process, it saves the full execution state into the PCB. When it resumes that process later, it restores everything from the PCB so the process picks up exactly where it left off.
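A PCB can be pictured as a plain record grouping the fields listed above. This is a simplified Python sketch of those categories, not a real kernel structure (Linux, for instance, uses a C struct called `task_struct`):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"
    # saved CPU context, restored on a context switch
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    # scheduling info
    priority: int = 0
    # resource info
    open_files: list = field(default_factory=list)

p = PCB(pid=42)
p.state = "ready"   # a state transition updates the PCB's state field
```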

Compare: Process States vs. PCB — states describe where a process is in its lifecycle, while the PCB stores everything the OS knows about that process. FRQs often ask how a state transition updates the PCB.


Process Creation and Termination

Understanding how processes begin and end reveals the OS's role as a resource manager. The kernel must allocate resources at creation and reclaim them at termination. Failures here cause memory leaks and zombie processes.

Creation and Termination Mechanisms

System calls initiate creation. In Unix/Linux, fork() duplicates the calling (parent) process, creating a child with a copy of the parent's address space. The child then typically calls exec() to replace that copy with a new program. In Windows, CreateProcess() handles both steps at once, building a new process from scratch.

After fork(), both parent and child continue executing from the same point, but fork() returns different values to each: 0 to the child, and the child's PID to the parent. This is how each process knows which role it plays.
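On a Unix-like system you can see this return-value convention directly through Python's `os.fork`, a thin wrapper over the system call (the exit status 7 is just a distinctive value for illustration):

```python
import os

pid = os.fork()                      # duplicate the calling process
if pid == 0:
    # child: fork() returned 0
    os._exit(7)                      # exit immediately with status 7
else:
    # parent: fork() returned the child's PID
    _, status = os.waitpid(pid, 0)   # reap the child (prevents a zombie)
```

Note the parent's `waitpid` call: skipping it is exactly how zombie processes arise.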

Termination can be voluntary or forced. A process exits normally by calling exit() when it finishes its work. Abnormal termination results from unhandled errors, signals (like SIGKILL), or a parent process deciding to kill a child.

Resource cleanup is critical. The OS must deallocate memory, close file handles, and notify the parent process. If a child terminates but its parent never calls wait() to collect its exit status, the child becomes a zombie process, occupying a slot in the process table without doing useful work. If the parent terminates first, the child becomes an orphan, which the OS typically reassigns to the init process (PID 1).


CPU Scheduling Mechanisms

Scheduling determines which process runs when. The OS must balance fairness, efficiency, and responsiveness, and these goals often conflict with each other.

Process Scheduling

The scheduler selects from the ready queue based on a chosen algorithm. The selection criteria might include arrival time, estimated burst length, or assigned priority.

A key distinction is preemptive vs. non-preemptive scheduling:

  • Preemptive — the OS can interrupt a running process and move it back to Ready (better responsiveness, used by most modern OSes)
  • Non-preemptive (cooperative) — a process keeps the CPU until it voluntarily yields or blocks (simpler to implement, but one misbehaving process can hog the CPU)

Common scheduling goals include maximizing throughput (processes completed per unit time), minimizing turnaround time (total time from submission to completion), minimizing waiting time (time spent in the ready queue), and minimizing response time (time from submission to first output). Optimizing for one often hurts another.

CPU Scheduling Algorithms

FCFS (First-Come, First-Served) — Processes run in arrival order. Simple to implement (just a FIFO queue), but suffers from the convoy effect: if a long process arrives first, every short process behind it waits. Average wait time can be very high.

Shortest Job Next (SJN) / Shortest Job First (SJF) — Picks the process with the smallest estimated CPU burst. This provably minimizes average wait time, but it has two problems: you need to predict burst lengths (usually done with exponential averaging of past bursts), and long processes can starve if short ones keep arriving. The preemptive version is called Shortest Remaining Time First (SRTF).
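The convoy effect, and SJF's advantage, are easy to check numerically. A sketch assuming all jobs arrive at time 0, with made-up burst lengths (24, 3, 3 is the classic textbook example):

```python
def avg_wait(bursts):
    # each job waits for the total burst time of the jobs ahead of it
    wait, elapsed = [], 0
    for b in bursts:
        wait.append(elapsed)
        elapsed += b
    return sum(wait) / len(wait)

bursts = [24, 3, 3]              # long job arrives first: convoy effect
fcfs = avg_wait(bursts)          # jobs wait 0, 24, 27 -> average 17.0
sjf = avg_wait(sorted(bursts))   # SJF runs short jobs first -> average 3.0
```

Reordering just three jobs cuts the average wait from 17 to 3 time units, which is why SJF is the benchmark other algorithms are compared against.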

Round Robin (RR) — Each process gets a fixed time quantum (e.g., 10–100 ms). When the quantum expires, the process is preempted and placed at the back of the ready queue. RR guarantees fairness, but performance depends heavily on quantum size. Too small = excessive context switch overhead. Too large = degrades toward FCFS behavior.
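RR's behavior can be simulated with a FIFO queue. A sketch with invented burst lengths and a quantum of 4 (context switch cost ignored for simplicity):

```python
from collections import deque

def round_robin(bursts, quantum):
    # returns each process's completion time; all arrive at t = 0
    queue = deque(enumerate(bursts))
    t, finish = 0, {}
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run                                # process i runs one slice
        if remaining > run:
            queue.append((i, remaining - run))  # preempted: back of the queue
        else:
            finish[i] = t                       # burst complete
    return finish

finish = round_robin([24, 3, 3], quantum=4)
```

Here the two short jobs finish at t=7 and t=10 instead of waiting behind the 24-unit job, showing how RR sidesteps the convoy effect at the cost of extra switches.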

Priority Scheduling — Each process is assigned a priority, and the highest-priority process runs first. Useful for real-time systems where certain tasks genuinely matter more. The risk is starvation of low-priority processes.

Process Priorities

Priority levels can be assigned statically (at creation) or dynamically (adjusted at runtime). In real-time systems, priorities ensure critical tasks like sensor readings or safety controls always get CPU time.

Starvation occurs when low-priority processes sit in the ready queue indefinitely because higher-priority work keeps arriving. The standard solution is aging: the OS gradually increases a process's priority the longer it waits. This guarantees every process eventually runs.
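Aging can be sketched as a loop that bumps the priority of every waiting process each tick. The numbers and the +1-per-tick rule below are illustrative (higher number = higher priority):

```python
def schedule_with_aging(base, ticks):
    # base: {name: base_priority}; returns which process ran each tick
    prio = dict(base)
    history = []
    for _ in range(ticks):
        chosen = max(prio, key=prio.get)  # run the highest-priority process
        history.append(chosen)
        prio[chosen] = base[chosen]       # reset the one that just ran
        for p in prio:
            if p != chosen:
                prio[p] += 1              # aging: waiting processes gain priority
    return history

hist = schedule_with_aging({"hi": 10, "lo": 1}, ticks=20)
```

Without the aging line, "lo" would never appear in the history; with it, "lo" is guaranteed a turn once its accumulated priority overtakes "hi".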

Compare: Round Robin vs. Priority Scheduling — RR guarantees fairness through equal time slices, while Priority Scheduling optimizes for importance but requires starvation prevention. For real-time systems, Priority Scheduling is the better fit; for general-purpose time-sharing systems, RR is more appropriate.


Context and State Management

When the CPU switches between processes, the OS must preserve and restore execution state perfectly. Context switching is the mechanism that makes multitasking possible, but it comes at a cost.

Context Switching

A context switch happens whenever the OS takes the CPU away from one process and gives it to another. Here's the sequence:

  1. An interrupt, system call, or preemption timer fires, transferring control to the kernel.
  2. The OS saves the current process's CPU state (registers, program counter, stack pointer) into its PCB.
  3. The current process's state is updated (e.g., Running → Ready or Running → Waiting).
  4. The scheduler selects the next process from the ready queue.
  5. The OS loads that process's saved state from its PCB into the CPU registers.
  6. The new process's state is set to Running, and execution resumes where it left off.
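The save/restore steps can be mimicked with dictionaries standing in for PCBs and the CPU's register file. This is purely illustrative; the names and values are invented:

```python
# Each "PCB" holds a process's saved register context and its state
pcb = {
    "A": {"regs": {"pc": 100, "sp": 0x1000}, "state": "running"},
    "B": {"regs": {"pc": 200, "sp": 0x2000}, "state": "ready"},
}
cpu = dict(pcb["A"]["regs"])          # A's context is currently live on the CPU

def context_switch(old, new):
    pcb[old]["regs"] = dict(cpu)      # step 2: save CPU state into old PCB
    pcb[old]["state"] = "ready"       # step 3: update old process's state
    cpu.update(pcb[new]["regs"])      # step 5: restore new process's context
    pcb[new]["state"] = "running"     # step 6: mark the new process running

context_switch("A", "B")
```

After the call, the CPU's program counter is B's saved value, and A's PCB holds everything needed to resume A later exactly where it stopped.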

Overhead is the key concern. During a context switch, the CPU does zero useful work for any process. Frequent switching (from a very small RR quantum, for example) wastes cycles on housekeeping. The switch itself typically takes microseconds, but cache and TLB invalidation can add hidden costs since the new process's data likely isn't in cache yet.

Compare: Context Switching vs. Mode Switching — a context switch changes which process runs (expensive, involves saving/restoring full state). A mode switch changes privilege level within the same process, like when a user-mode process makes a system call and the CPU switches to kernel mode (cheaper, no process state swap needed). Know this distinction for questions about system call overhead.


Concurrency and Coordination

When multiple processes or threads share resources, things can go wrong fast. Synchronization and communication mechanisms prevent race conditions, deadlocks, and data corruption.

Process Synchronization

A race condition occurs when the outcome of a program depends on the unpredictable timing of multiple processes accessing shared data. For example, if two processes both read a counter, increment it, and write it back, one update can be lost.

The critical section problem is about ensuring that when one process is executing code that accesses shared data, no other process can execute its corresponding critical section. A correct solution must satisfy three requirements:

  • Mutual exclusion — only one process in the critical section at a time
  • Progress — if no process is in the critical section and some want to enter, the choice of which enters next cannot be postponed indefinitely
  • Bounded waiting — a limit on how many times other processes can enter before a waiting process gets its turn
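In Python, `threading.Lock` acts as a mutex. A sketch of the lost-update race described above, fixed by wrapping the read-modify-write in a lock (thread count and iteration count are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # entry section: acquire the mutex
            counter += 1    # critical section: read, modify, write shared data
        # exit section: `with` releases the lock automatically

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, the final count is exactly 4 × 10,000; without it, updates can be lost because two threads may read the same old value before either writes.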

Core synchronization mechanisms:

  • Mutex (mutual exclusion lock) — a binary lock. A process acquires the mutex before entering the critical section and releases it when done. Only one holder at a time.
  • Semaphore — a generalized counter. A binary semaphore acts like a mutex. A counting semaphore allows up to n processes to access a resource simultaneously (useful for resource pools like database connections).
  • Monitor — a higher-level construct that bundles shared data, procedures, and synchronization into one unit. Only one process can be active inside a monitor at a time, and condition variables allow processes to wait for specific conditions.
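A counting semaphore initialized to n caps concurrency at n. A sketch with a pool size of 2; the `active`/`peak` bookkeeping is just instrumentation added for the example:

```python
import threading

pool = threading.Semaphore(2)      # counting semaphore: at most 2 holders
guard = threading.Lock()           # protects the instrumentation counters
active = peak = 0

def worker():
    global active, peak
    with pool:                     # acquire (wait): blocks if 2 already inside
        with guard:
            active += 1
            peak = max(peak, active)
        # ... use one of the 2 pooled resources here ...
        with guard:
            active -= 1
    # release (signal) happens automatically when the `with pool` block exits

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even with 8 threads competing, the semaphore guarantees no more than 2 are ever inside the pooled region at once.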

Without proper synchronization, you get race conditions (corrupted data) or deadlocks (processes waiting on each other in a cycle, none able to proceed).

Inter-Process Communication (IPC)

Unlike threads, processes have isolated memory spaces. They can't just read each other's variables. IPC provides explicit channels for data exchange.

  • Pipes — unidirectional byte streams, typically between a parent and child process. Simple but limited to related processes (named pipes/FIFOs remove this restriction).
  • Message queues — structured messages sent to a queue that the receiving process reads from. More flexible than pipes since processes don't need a parent-child relationship.
  • Shared memory — a region of memory mapped into multiple processes' address spaces. The fastest IPC method since there's no kernel involvement after setup, but you must use synchronization (like semaphores) to prevent race conditions on the shared region.
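A parent-child pipe can be sketched with `os.pipe` and `fork` on a Unix-like system (the message text is arbitrary):

```python
import os

r, w = os.pipe()                       # unidirectional: read end, write end
pid = os.fork()
if pid == 0:
    os.close(r)                        # child only writes
    os.write(w, b"hello from child")   # send bytes through the pipe
    os._exit(0)
else:
    os.close(w)                        # parent only reads
    msg = os.read(r, 1024)             # blocks until data arrives
    os.waitpid(pid, 0)                 # reap the child
```

This also illustrates the pipe's main limitation: the channel exists because parent and child inherited the same file descriptors at fork time.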

Synchronous (blocking) IPC makes the sender wait until the receiver gets the message, which simplifies coordination but can cause delays. Asynchronous (non-blocking) IPC lets the sender continue immediately, improving responsiveness but requiring more careful design to handle messages that arrive at unpredictable times.

Compare: Synchronization vs. IPC — synchronization controls access to shared resources (preventing corruption), while IPC enables data transfer between processes. A mutex prevents two processes from corrupting shared memory; a pipe lets them send messages. Exam questions may ask which mechanism fits a given scenario.


Threads and Parallelism

Threads offer lightweight concurrency within a single process. By sharing an address space while maintaining separate execution contexts, threads reduce overhead compared to full processes, but they introduce new synchronization challenges.

Multithreading

A thread is the smallest unit of CPU scheduling. Multiple threads within the same process share:

  • Code section
  • Data section (global variables)
  • Open files and signals
  • Heap memory

Each thread maintains its own:

  • Program counter
  • Register set
  • Stack

This sharing is why threads are "lighter weight" than processes. Creating a thread doesn't require duplicating an entire address space, and switching between threads in the same process is faster because memory mappings don't change (no TLB flush needed).
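The shared-vs-private split shows up directly in code: module-level data is visible to every thread, while locals live on each thread's own stack. A minimal Python sketch:

```python
import threading

shared = []                    # data section: one copy, visible to all threads

def worker(tag):
    local = [tag] * 3          # stack-local: private to this thread
    shared.extend(local)       # writes to shared state cross thread boundaries

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Both threads' contributions land in the one shared list, while each `local` existed only on its own thread's stack.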

Threads improve performance in two main ways. Parallelism: on a multi-core system, threads can truly run simultaneously on different cores. Concurrency: even on a single core, one thread can handle I/O while another does computation, keeping the CPU busy.

The downside is that shared memory means threads can corrupt each other's data without proper synchronization. A bug in one thread (like a buffer overflow) can crash the entire process since all threads share the same address space.

Compare: Processes vs. Threads — processes have isolated memory (safer but expensive to create and switch), while threads share memory (faster but require synchronization). This trade-off appears constantly in systems design questions.


Quick Reference Table

  • Process Lifecycle — Process States, PCB, Creation/Termination
  • Scheduling Strategies — FCFS, Round Robin, Priority Scheduling, SJN/SJF
  • Preemption Trade-offs — Preemptive vs. Non-preemptive, Context Switch Overhead
  • Synchronization Primitives — Mutexes, Semaphores, Monitors
  • IPC Methods — Pipes, Message Queues, Shared Memory
  • Concurrency Models — Multithreading, Process-based Parallelism
  • Starvation Prevention — Aging, Fair Scheduling Algorithms
  • State Preservation — PCB, Context Switching

Self-Check Questions

  1. A process moves from Running to Waiting state. What event likely caused this transition, and what PCB fields would the OS update?

  2. Compare Round Robin and Shortest Job Next scheduling: which minimizes average wait time, and which guarantees fairness? What's the trade-off for each?

  3. You're designing a system where multiple processes must access a shared database. Would you use a mutex, a semaphore, or a monitor? Justify your choice and explain what could go wrong without synchronization.

  4. Explain why threads are "lighter weight" than processes. What do threads share, and what must remain separate? What new problems does this sharing introduce?

  5. A low-priority background process hasn't run in hours despite being in the Ready state. Identify the problem and describe a scheduling technique that would solve it.