

CPU Scheduling Algorithms


Why This Matters

CPU scheduling is the heart of how an operating system manages process execution—it's the traffic controller deciding which process gets the CPU and for how long. You're being tested on understanding the tradeoffs each algorithm makes between throughput, response time, fairness, and overhead. These concepts appear repeatedly in exam questions about system performance, process management, and operating system design decisions.

Don't just memorize algorithm names and definitions. Know why each algorithm exists, what problem it solves, and what new problems it creates. When you see an exam question about scheduling, you should immediately think: Is this about minimizing wait time? Preventing starvation? Balancing fairness with efficiency? The algorithm you choose—and how you justify it—demonstrates real understanding.


Non-Preemptive Algorithms

These algorithms let a process run to completion (or until it voluntarily yields). Once the CPU is assigned, it stays assigned. The scheduler only makes decisions when a process terminates or blocks.

First-Come, First-Served (FCFS)

  • Simplest scheduling algorithm—processes execute in arrival order, like a basic queue data structure
  • Convoy effect occurs when short processes queue behind long ones, dramatically increasing average wait time
  • Non-preemptive by nature, making it unsuitable for interactive systems where responsiveness matters
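
To see the convoy effect in numbers, here is a minimal sketch; the burst times are invented, and every process is assumed to arrive at time 0 in list order.

```python
# FCFS wait-time calculation (illustrative sketch, not production code).
bursts = [24, 3, 3]   # one long job ahead of two short ones -> convoy effect

waits = []
clock = 0
for burst in bursts:
    waits.append(clock)   # each process waits for everything that arrived before it
    clock += burst        # non-preemptive: the job runs to completion

print(waits)                     # [0, 24, 27]
print(sum(waits) / len(waits))   # average wait = 17.0
```

Reordering the bursts to [3, 3, 24] drops the average wait from 17 to 3 time units, which is exactly the gap SJF (below) is designed to close.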

Shortest Job First (SJF)

  • Provably optimal for minimizing average waiting time among non-preemptive algorithms
  • Requires knowing burst time in advance, which is often estimated using exponential averaging of past behavior
  • Starvation risk for long processes—if short jobs keep arriving, long jobs may wait indefinitely
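
A minimal sketch of the exponential-averaging estimate mentioned above; the initial guess of 10, the measured bursts, and the choice of α = 0.5 are illustrative values only.

```python
# Exponential averaging: tau_next = alpha * last_actual + (1 - alpha) * tau_previous
def predict_next_burst(last_actual, tau_previous, alpha=0.5):
    """Blend the most recent measured burst with the running estimate."""
    return alpha * last_actual + (1 - alpha) * tau_previous

tau = 10.0                     # initial guess for the first burst (assumed)
for actual in [6, 4, 6, 13]:   # measured CPU bursts (made-up data)
    print(f"predicted = {tau:.1f}, actual = {actual}")
    tau = predict_next_burst(actual, tau)
```

A larger α reacts faster to recent behavior; a smaller α smooths out the effect of one unusual burst.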

Highest Response Ratio Next (HRRN)

  • Response ratio formula: $\frac{\text{waiting time} + \text{burst time}}{\text{burst time}}$—higher ratio gets scheduled next
  • Prevents starvation by factoring in wait time, giving long-waiting processes increasing priority
  • Balances SJF efficiency with fairness, though calculation overhead is higher than simpler algorithms
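
A small sketch of how an HRRN pick might look; the process names, waiting times, and burst times are illustrative.

```python
# Pick the ready process with the highest response ratio.
def response_ratio(waiting_time, burst_time):
    return (waiting_time + burst_time) / burst_time

# (name, waiting_time, burst_time) -- illustrative numbers only
ready_queue = [("A", 9, 3), ("B", 2, 1), ("C", 20, 25)]

chosen = max(ready_queue, key=lambda p: response_ratio(p[1], p[2]))
print(chosen[0])   # "A": ratio 4.0 beats B's 3.0 and C's 1.8
```

Note how A, which has waited three times its own burst length, outranks the shorter but recently arrived B; given enough waiting, even the long job C would eventually win.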

Compare: SJF vs. HRRN—both favor shorter jobs initially, but HRRN's aging mechanism prevents indefinite starvation. If an exam asks about improving SJF's fairness, HRRN is your answer.


Preemptive Algorithms

These algorithms can interrupt a running process to give the CPU to another. The scheduler actively reclaims CPU time based on events like new arrivals or timer interrupts.

Shortest Remaining Time First (SRTF)

  • Preemptive version of SJF—when a new process arrives with shorter burst time, it preempts the current process
  • Optimal for minimizing average waiting time among all scheduling algorithms, preemptive or not
  • Context switch overhead and starvation remain significant concerns in high-arrival-rate systems
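
A toy SRTF simulation that advances one time unit at a time; the arrival times and bursts are invented for illustration, and a real scheduler would work from estimates rather than known bursts.

```python
# SRTF sketch: each tick, run the arrived process with the least remaining time.
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 2)}   # name: (arrival, burst)
remaining = {name: burst for name, (_, burst) in procs.items()}
finish, clock = {}, 0

while remaining:
    ready = [n for n in remaining if procs[n][0] <= clock]
    if not ready:
        clock += 1
        continue
    current = min(ready, key=lambda n: remaining[n])   # shortest remaining time wins
    remaining[current] -= 1                            # run it for one time unit
    clock += 1
    if remaining[current] == 0:
        finish[current] = clock
        del remaining[current]

for name, (arrival, burst) in procs.items():
    print(name, "waited", finish[name] - arrival - burst)   # P1: 6, P2: 2, P3: 0
```

Notice that P1, the longest job, absorbs all the waiting; with a steady stream of short arrivals it might never finish, which is the starvation concern noted above.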

Round Robin (RR)

  • Time quantum defines the maximum CPU time a process can use before it is forcibly preempted and rotated to the end of the ready queue
  • Fairness guaranteed—every process gets equal CPU time slices, ideal for time-sharing systems
  • Quantum size tradeoff: too small creates excessive context switches; too large degrades to FCFS behavior
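
A minimal Round Robin sketch built on a queue; the quantum of 4 and the burst values are assumptions (textbook quanta are commonly on the order of 10 to 100 milliseconds).

```python
from collections import deque

quantum = 4
queue = deque([("P1", 10), ("P2", 5), ("P3", 8)])   # (name, remaining burst), all ready at t = 0
clock = 0

while queue:
    name, remaining = queue.popleft()
    run = min(quantum, remaining)               # run for at most one quantum
    clock += run
    if remaining > run:
        queue.append((name, remaining - run))   # preempt and rotate to the back
    else:
        print(f"{name} finished at t={clock}")  # P2 at 17, P3 at 21, P1 at 23
```

Every pass that re-appends a process stands in for a context switch, so shrinking the quantum multiplies those switches, while a quantum larger than every burst reproduces FCFS.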

Priority Scheduling

  • Priority value determines execution order—lower numbers typically mean higher priority (convention varies)
  • Can be preemptive or non-preemptive, with preemptive version interrupting when higher-priority process arrives
  • Indefinite blocking (starvation) solved through aging, which gradually increases priority of waiting processes
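
A tiny sketch of aging, assuming lower numbers mean higher priority and an aging step of 2 per scheduling round; both numbers are arbitrary illustrations.

```python
AGING_STEP = 2      # how much a waiting process's priority improves each round (assumed)

old_priority = 10   # long-waiting background job (lower number = higher priority)
rounds = 0
while old_priority > 3:          # a fresh priority-3 job arrives and wins every round
    old_priority -= AGING_STEP   # the waiting job slowly climbs in priority
    rounds += 1

print(f"background job wins the CPU after {rounds} rounds "
      f"(priority aged from 10 down to {old_priority})")
```

Without the aging step, the background job could be passed over indefinitely, which is exactly the starvation described above.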

Compare: SRTF vs. Round Robin—SRTF optimizes for throughput and wait time but risks starvation; RR guarantees fairness but may have higher average wait time. FRQs often ask you to recommend one based on system requirements (batch vs. interactive).


Multi-Queue Algorithms

These algorithms use multiple queues to handle different process types or adapt to process behavior. Complexity increases, but so does flexibility and real-world applicability.

Multilevel Queue Scheduling

  • Separate queues for process categories—typically foreground (interactive) and background (batch) with different algorithms per queue
  • Queue assignment is permanent—each process stays in the queue it was originally placed in, and each queue keeps a fixed priority relative to the others
  • Inter-queue scheduling required—often foreground gets absolute priority or a time-slice ratio like 80/20
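
One way to picture the time-slice split between queues; the queue contents and the 80/20 ratio are illustrative, and in practice each queue would also run its own internal algorithm (for example, RR for foreground and FCFS for background).

```python
from collections import deque

foreground = deque(["shell", "editor"])      # interactive jobs
background = deque(["backup", "compiler"])   # batch jobs

def next_process(tick):
    # Out of every 10 ticks, give 8 to the foreground queue and 2 to the background queue.
    q = foreground if tick % 10 < 8 else background
    p = q.popleft()
    q.append(p)   # fixed assignment: the process always returns to its own queue
    return p

print([next_process(t) for t in range(10)])
# ['shell', 'editor', 'shell', 'editor', 'shell', 'editor', 'shell', 'editor', 'backup', 'compiler']
```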

Multilevel Feedback Queue Scheduling

  • Processes migrate between queues based on observed CPU behavior: a process that uses its entire time quantum is demoted, while one that gives up the CPU early (typically for I/O) stays in, or is promoted back to, a higher-priority queue, as the sketch after this list shows
  • Self-adjusting algorithm that identifies CPU-bound vs. I/O-bound processes automatically over time
  • Most flexible general-purpose scheduler, but requires tuning: number of queues, scheduling per queue, promotion/demotion criteria
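
A minimal sketch of one promotion/demotion rule; the number of levels, the quanta, and the promote-on-early-block policy are all tunable assumptions, and many real variants only demote and then periodically reset everything to the top queue.

```python
QUANTA = [2, 4, 8]   # time slice per queue level, highest priority first (assumed values)

def reschedule(level, used_full_quantum):
    """Return the queue level a process re-enters after its turn on the CPU."""
    if used_full_quantum:
        return min(level + 1, len(QUANTA) - 1)   # looked CPU-bound: demote
    return max(level - 1, 0)                     # blocked early (I/O-bound): promote

print(reschedule(0, True))    # 1 -- burned its whole slice, moves down a level
print(reschedule(2, False))   # 1 -- gave up the CPU early, moves up a level
```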

Compare: Multilevel Queue vs. Multilevel Feedback Queue—the key difference is mobility. Static assignment works when process types are known; feedback queues adapt when behavior varies. This distinction is a common exam question.


Fairness-Focused Algorithms

These algorithms prioritize equitable CPU distribution, often using sophisticated tracking mechanisms. Modern systems increasingly value fairness alongside raw performance.

Completely Fair Scheduler (CFS)

  • Linux's default scheduler since kernel 2.6.23, replacing the O(1) scheduler
  • Virtual runtime tracking ensures each process receives CPU time proportional to its weight, using a red-black tree for $O(\log n)$ operations
  • No fixed time quantum—dynamically calculates time slices based on number of runnable processes and target latency
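
A heavily simplified model of the virtual-runtime idea, not the kernel's actual code: a heap stands in for the red-black tree, the weights and the 6 ms slice are invented, and the real scheduler derives weights from nice values and sizes slices from a target latency.

```python
import heapq

SLICE = 6.0   # real CPU time (ms) granted per pick (assumed value)

# (vruntime, name, weight) -- higher weight means a larger share of the CPU
runqueue = [(0.0, "editor", 2.0), (0.0, "compiler", 1.0)]
heapq.heapify(runqueue)

for _ in range(6):
    vruntime, name, weight = heapq.heappop(runqueue)   # leftmost task = most "owed" CPU
    vruntime += SLICE / weight                         # heavier tasks accrue vruntime more slowly
    print(f"run {name:9s} new vruntime = {vruntime:4.1f}")
    heapq.heappush(runqueue, (vruntime, name, weight))
```

Over these six picks the weight-2 task runs four times and the weight-1 task twice, which is the proportional share CFS aims for.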

Longest Remaining Time First (LRTF)

  • Inverse of SRTF—prioritizes processes with the longest remaining burst time
  • Rarely used in practice because it maximizes wait time for short processes, opposite of typical optimization goals
  • Theoretical importance for understanding scheduling tradeoffs and as a counterexample in algorithm analysis

Compare: CFS vs. Round Robin—both aim for fairness, but CFS uses proportional allocation with weights while RR uses fixed equal slices. CFS adapts better to varying workloads and is the real-world choice for Linux systems.


Quick Reference Table

| Concept | Best Examples |
|---|---|
| Minimizing average wait time | SJF, SRTF |
| Guaranteed fairness | Round Robin, CFS |
| Preventing starvation | HRRN, Priority with aging, Multilevel Feedback Queue |
| Simplest implementation | FCFS |
| Preemptive scheduling | SRTF, Round Robin, Preemptive Priority |
| Adaptive behavior | Multilevel Feedback Queue, CFS |
| Real-world Linux usage | CFS |
| Batch system optimization | SJF, FCFS |

Self-Check Questions

  1. Which two algorithms are provably optimal for minimizing average waiting time, and what distinguishes them from each other?

  2. A system experiences starvation of long-running processes. Which algorithms could cause this problem, and what mechanism solves it in Priority Scheduling?

  3. Compare and contrast Multilevel Queue and Multilevel Feedback Queue scheduling—when would you choose one over the other?

  4. If the time quantum in Round Robin is set extremely large, what algorithm does it effectively become? What happens if it's set extremely small?

  5. An FRQ asks you to design a scheduler for a system running both interactive terminals and background batch jobs. Which algorithm would you recommend and why? What parameters would you need to specify?