
Operating Systems Unit 2 Study Guides

Process Management

Unit 2 Review

Process management is a crucial aspect of operating systems, overseeing the execution of programs and allocation of resources. It involves tracking process states, scheduling CPU time, and facilitating communication between processes to ensure efficient system operation. Key concepts include process states, scheduling algorithms, and inter-process communication methods. Understanding these elements is essential for optimizing system performance, preventing issues like deadlocks and starvation, and developing robust applications in various computing environments.

What's Process Management?

  • Fundamental component of operating systems that manages processes and their execution
  • Processes represent running programs or applications that require system resources (CPU time, memory)
  • Process management tracks process state, resource usage, and execution order
  • Ensures fair allocation of system resources among competing processes
  • Provides mechanisms for processes to communicate and synchronize with each other
  • Implements scheduling algorithms to determine which process runs next on the CPU
  • Handles process creation and termination, along with any associated cleanup tasks
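These responsibilities are visible from user space. As a minimal sketch (assuming a system with Python available, not an example from the source), the parent below creates a child process, receives its OS-assigned PID, and collects the child's exit status when it terminates:

```python
import subprocess
import sys

# Spawn a child process that runs a short Python one-liner.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child running')"],
    stdout=subprocess.PIPE,
    text=True,
)

# The OS assigns the child a unique process ID (PID).
print(f"child PID: {child.pid}")

# Wait for the child to terminate and collect its exit status,
# allowing the OS to clean up the child's resources.
output, _ = child.communicate()
print(output.strip())
print(f"exit code: {child.returncode}")
```

Collecting the exit status (here via communicate) is the cleanup step in the last bullet: it tells the OS the parent is done with the child.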

Key Concepts in Process Management

  • Process represents an instance of a running program, identified by a unique process ID (PID)
  • Process control block (PCB) stores metadata about each process (state, CPU registers, memory limits, open files)
  • Context switching saves the state of the current process and loads the saved state of another process
  • Scheduling determines which process runs next based on factors like priority, CPU burst time, and arrival time
  • Interprocess communication (IPC) enables processes to exchange data and synchronize their execution
  • Shared memory allows multiple processes to access the same memory region for efficient IPC
  • Signals provide a way for the OS or other processes to send notifications to a process
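The signal mechanism in the last bullet can be shown in a few lines. This sketch assumes a POSIX system (SIGUSR1 does not exist on Windows): the process registers a handler, sends itself a signal, and the handler records the delivery:

```python
import os
import signal

received = []

def handler(signum, frame):
    # Record the signal number the OS delivered to this process.
    received.append(signum)

# Register a handler for SIGUSR1, then send that signal to ourselves.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)

print(received)
```

The same os.kill call is how one process notifies another: it just takes the target's PID instead of os.getpid().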

Process States and Lifecycle

  • New: Process is being created, PCB is initialized, but not yet added to the ready queue
  • Ready: Process is waiting to be assigned to a CPU, loaded into memory and ready to execute
  • Running: Instructions are being executed by the CPU, process is actively consuming resources
  • Waiting (Blocked): Process is waiting for some event to occur (I/O completion, signal, resource availability)
    • Transition to the waiting state occurs when a process issues a blocking system call (e.g., an I/O request); preemption by the scheduler moves a process back to the ready state, not the waiting state
  • Terminated: Process has finished execution or has been aborted, releases all acquired resources
  • Suspended Ready: Process is in secondary storage (swapped out) but ready to execute once loaded into memory
  • Suspended Waiting: Process is in secondary storage and waiting for an event before it can resume execution

Process Scheduling Algorithms

  • First-Come, First-Served (FCFS): Non-preemptive, processes are executed in the order they arrive
  • Shortest Job Next (SJN, also called Shortest Job First): Non-preemptive, selects the process with the shortest next CPU burst
  • Priority Scheduling: Each process is assigned a priority, highest priority process is executed first
    • Preemptive version allows the current running process to be interrupted by a higher-priority process
  • Round Robin (RR): Preemptive, each process is given a fixed time quantum to execute before being preempted
  • Multilevel Queue: Ready queue is partitioned into separate queues, each with its own scheduling algorithm
  • Multilevel Feedback Queue: Allows processes to move between queues based on their behavior and CPU bursts
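To see why the choice of algorithm matters, a short simulation helps. The sketch below is a toy model (it assumes all jobs arrive at time 0, which the source does not specify) that computes per-process waiting times under FCFS and then under SJN ordering, using the classic burst times 24, 3, 3:

```python
def fcfs_waiting_times(bursts):
    """First-Come, First-Served: each process waits for all earlier arrivals."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # time spent waiting before this process runs
        elapsed += burst
    return waits

def sjn_order(bursts):
    """Shortest Job Next (all jobs arriving at time 0): run shortest first."""
    return sorted(bursts)

bursts = [24, 3, 3]
print(fcfs_waiting_times(bursts))            # [0, 24, 27] -> average 17
print(fcfs_waiting_times(sjn_order(bursts))) # [0, 3, 6]   -> average 3
```

Running the short jobs first cuts the average waiting time from 17 to 3 time units, which is exactly the convoy problem FCFS suffers from.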

Inter-Process Communication

  • Shared Memory: Multiple processes can access a common memory region for fast data exchange
    • Requires synchronization mechanisms (locks, semaphores) to prevent race conditions
  • Message Passing: Processes send messages to each other through communication channels (pipes, sockets)
    • Useful for exchanging small amounts of data or control signals between processes
  • Pipes: Unidirectional communication channels, data written by one process is read by another
    • Named pipes (FIFOs) can be used for communication between unrelated processes
  • Sockets: Bidirectional communication endpoints, support both local and remote inter-process communication
  • Remote Procedure Calls (RPC): Allows a process to execute a procedure in another address space (local or remote)
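A pipe is easy to demonstrate on a Unix-like system. In this sketch (POSIX-only, since it relies on os.fork and os.pipe), the child writes into one end of an anonymous pipe and the parent reads from the other:

```python
import os

# Create a unidirectional pipe: bytes written to `w` can be read from `r`.
r, w = os.pipe()

pid = os.fork()
if pid == 0:                      # child: writes into the pipe
    os.close(r)                   # close the end the child does not use
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                             # parent: reads what the child wrote
    os.close(w)
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)            # reap the child so it does not linger
    print(data.decode())          # hello from child
```

Note the direction: data flows only from writer to reader, matching the "unidirectional" point above; sockets would be used where traffic must flow both ways.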

Process Synchronization

  • Race Condition: Occurs when multiple processes access shared data concurrently, leading to unpredictable results
  • Critical Section: Code segment where a process accesses shared resources, must be executed atomically
  • Mutex Locks: Ensures mutual exclusion by allowing only one process to enter a critical section at a time
  • Semaphores: Generalized synchronization mechanism, can be used for resource counting and process coordination
    • Binary semaphores behave like mutexes for mutual exclusion, while counting semaphores track how many units of a resource are available
  • Monitors: High-level synchronization construct that encapsulates shared data and provides safe access methods
  • Deadlock: Situation where two or more processes are unable to proceed because each is waiting for the other to release a resource
    • Occurs when four conditions hold simultaneously: mutual exclusion, hold and wait, no preemption, circular wait
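Mutual exclusion is easiest to see with threads sharing a counter. The sketch below guards the read-modify-write critical section with a mutex; without the lock, interleaved updates could be lost (even in CPython, where += spans several bytecodes):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Critical section: a read-modify-write on shared data.
        with lock:                # mutual exclusion around the update
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- without the lock this could come out lower
```

Replacing the Lock with threading.Semaphore(1) gives the binary-semaphore formulation of the same guarantee.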

Common Process Management Issues

  • Starvation: A process is perpetually denied necessary resources, unable to make progress
    • Can occur in priority scheduling if low-priority processes are constantly bypassed by higher-priority ones
  • Priority Inversion: A high-priority process is indirectly preempted by a lower-priority process
    • Happens when the high-priority process is waiting for a resource held by the low-priority process
  • Convoy Effect: Short processes pile up behind a long-running process that holds the CPU or a shared resource (common under FCFS scheduling)
  • Thrashing: Excessive paging/swapping activity due to insufficient main memory to hold all active processes
    • Results in significant performance degradation as more time is spent on paging than actual execution
  • Orphan Processes: Processes whose parent has terminated without waiting for the child to finish or explicitly killing it; on Unix systems orphans are adopted (re-parented) by init, which reaps them when they exit
  • Zombie Processes: Terminated processes whose exit status has not been collected by the parent process
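Zombies are straightforward to observe on Linux (this sketch is Linux-specific, since it reads the /proc filesystem): after the child exits, its process-table entry lingers in state Z until the parent reaps it with waitpid:

```python
import os
import time

pid = os.fork()
if pid == 0:
    os._exit(7)          # child terminates immediately with status 7
else:
    time.sleep(0.1)      # child is now a zombie: dead, but not yet reaped
    # Field after the "(comm)" pair in /proc/<pid>/stat is the state letter.
    with open(f"/proc/{pid}/stat") as f:
        state = f.read().rsplit(")", 1)[1].split()[0]
    print(state)                     # Z  (zombie)
    _, status = os.waitpid(pid, 0)   # parent collects the exit status...
    print(os.WEXITSTATUS(status))    # ...and the zombie disappears: 7
```

This is why long-running servers either call waitpid for every child or ignore SIGCHLD, so terminated children do not accumulate as zombies.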

Real-World Applications

  • Operating Systems: Process management is a core component of all modern operating systems (Windows, Linux, macOS)
  • Web Servers: Handle multiple client requests concurrently, each request is processed in a separate thread or process
  • Database Management Systems: Use process and thread management to handle concurrent queries and transactions
  • Scientific Simulations: Often involve parallel processing, requiring careful coordination and synchronization between processes
  • Embedded Systems: Manage real-time processes and threads with strict timing constraints (automotive, avionics)
  • Distributed Computing: Involves managing processes across multiple nodes in a cluster or grid environment
  • Mobile Apps: Utilize process and thread management to handle user interactions, background tasks, and system events