CPU scheduling is the heart of how an operating system manages process execution—it's the traffic controller deciding which process gets the CPU and for how long. You're being tested on understanding the tradeoffs each algorithm makes between throughput, response time, fairness, and overhead. These concepts appear repeatedly in exam questions about system performance, process management, and operating system design decisions.
Don't just memorize algorithm names and definitions. Know why each algorithm exists, what problem it solves, and what new problems it creates. When you see an exam question about scheduling, you should immediately think: Is this about minimizing wait time? Preventing starvation? Balancing fairness with efficiency? The algorithm you choose—and how you justify it—demonstrates real understanding.
Non-preemptive algorithms let a process run to completion (or until it voluntarily yields). Once the CPU is assigned, it stays assigned. The scheduler only makes decisions when a process terminates or blocks.
Compare: SJF vs. HRRN—both favor shorter jobs initially, but HRRN's aging mechanism prevents indefinite starvation. If an exam asks about improving SJF's fairness, HRRN is your answer.
Preemptive algorithms can interrupt a running process to give the CPU to another. The scheduler actively reclaims CPU time based on events like new arrivals or timer interrupts.
Compare: SRTF vs. Round Robin—SRTF optimizes for throughput and wait time but risks starvation; RR guarantees fairness but may have higher average wait time. FRQs often ask you to recommend one based on system requirements (batch vs. interactive).
Multilevel algorithms use multiple queues to handle different process types or adapt to process behavior. Complexity increases, but so does flexibility and real-world applicability.
Compare: Multilevel Queue vs. Multilevel Feedback Queue—the key difference is mobility. Static assignment works when process types are known; feedback queues adapt when behavior varies. This distinction is a common exam question.
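The "mobility" idea can be sketched in a few lines. This is an illustrative toy (the class, the three-level layout, and the quanta are my assumptions, not a standard implementation): a process that burns its entire quantum looks CPU-bound and is demoted, while one that yields early keeps its priority.

```python
from collections import deque

QUANTA = [2, 4, 8]  # assumed time slices per level; lower levels get longer slices

class MLFQ:
    """Toy multilevel feedback queue: demote on full-quantum use."""

    def __init__(self):
        self.levels = [deque() for _ in QUANTA]

    def add(self, proc):
        # proc is a (name, remaining_burst) tuple; new arrivals start at top priority
        self.levels[0].append(proc)

    def step(self):
        """Run the front process of the highest-priority non-empty level."""
        for lvl, queue in enumerate(self.levels):
            if queue:
                name, remaining = queue.popleft()
                run = min(QUANTA[lvl], remaining)
                remaining -= run
                if remaining > 0:
                    # Used the whole slice: assume CPU-bound, demote one level.
                    dest = min(lvl + 1, len(self.levels) - 1)
                    self.levels[dest].append((name, remaining))
                return name, run
        return None  # nothing to run

sched = MLFQ()
sched.add(("hog", 10))  # long CPU burst: will sink to lower levels
sched.add(("io", 1))    # short burst: finishes at top priority
sched.step()  # runs "hog" for 2, demotes it to level 1
sched.step()  # runs "io" for 1; it finishes without demotion
```

A static multilevel queue would skip the demotion step entirely: each process is pinned to the queue it was assigned at creation, which is exactly the mobility difference the exam question targets.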
Fairness-oriented algorithms prioritize equitable CPU distribution, often using sophisticated tracking mechanisms. Modern systems increasingly value fairness alongside raw performance.
Compare: CFS vs. Round Robin—both aim for fairness, but CFS uses proportional allocation with weights while RR uses fixed equal slices. CFS adapts better to varying workloads and is the real-world choice for Linux systems.
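The weighted-fairness idea behind CFS can be sketched with virtual runtime: always run the task with the smallest vruntime, and advance vruntime more slowly for heavier weights. This is a simplified model of the mechanism, not the Linux kernel's actual code; the function, the 1 ms slice, and the task weights are illustrative assumptions (NICE_0_WEIGHT = 1024 does match the kernel's nice-0 weight).

```python
import heapq

NICE_0_WEIGHT = 1024  # baseline weight for a nice-0 task

def run_cfs(tasks, slices):
    """Toy CFS model: repeatedly run the task with the smallest vruntime.

    tasks:  dict mapping task name -> weight (higher weight = larger CPU share).
    slices: number of 1 ms slices to simulate.
    Returns a dict of CPU milliseconds consumed per task.
    """
    heap = [(0.0, name) for name in tasks]  # (vruntime, name) pairs
    heapq.heapify(heap)
    cpu_ms = {name: 0 for name in tasks}
    for _ in range(slices):
        vruntime, name = heapq.heappop(heap)
        cpu_ms[name] += 1
        # Heavier tasks accumulate vruntime more slowly, so they are
        # picked again sooner and receive a proportionally larger share.
        vruntime += NICE_0_WEIGHT / tasks[name]
        heapq.heappush(heap, (vruntime, name))
    return cpu_ms

# A task with double weight gets roughly double the CPU:
shares = run_cfs({"heavy": 2048, "light": 1024}, slices=30)
# shares == {"heavy": 20, "light": 10}
```

Contrast this with Round Robin, where both tasks would get exactly 15 slices each regardless of weight: CFS's proportional allocation is what lets it respect nice values and adapt to mixed workloads.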
| Concept | Best Examples |
|---|---|
| Minimizing average wait time | SJF, SRTF |
| Guaranteed fairness | Round Robin, CFS |
| Preventing starvation | HRRN, Priority with aging, Multilevel Feedback Queue |
| Simplest implementation | FCFS |
| Preemptive scheduling | SRTF, Round Robin, Preemptive Priority |
| Adaptive behavior | Multilevel Feedback Queue, CFS |
| Real-world Linux usage | CFS |
| Batch system optimization | SJF, FCFS |
1. Which two algorithms are provably optimal for minimizing average waiting time, and what distinguishes them from each other?
2. A system experiences starvation of long-running processes. Which algorithms could cause this problem, and what mechanism solves it in Priority Scheduling?
3. Compare and contrast Multilevel Queue and Multilevel Feedback Queue scheduling—when would you choose one over the other?
4. If the time quantum in Round Robin is set extremely large, what algorithm does it effectively become? What happens if it's set extremely small?
5. An FRQ asks you to design a scheduler for a system running both interactive terminals and background batch jobs. Which algorithm would you recommend and why? What parameters would you need to specify?