Local search techniques form the backbone of practical combinatorial optimization—they're how you actually solve problems when exact methods become computationally intractable. You're being tested on understanding the fundamental trade-off every optimizer faces: exploitation (improving the current solution) versus exploration (escaping to find better regions). Each technique in this guide represents a different strategy for balancing this trade-off, and exam questions will probe whether you understand why a particular method works, not just what it does.
These techniques demonstrate core principles like neighborhood structures, acceptance criteria, memory mechanisms, and population dynamics. When you encounter an FRQ asking you to recommend an optimization approach or analyze algorithm behavior, you need to connect the problem characteristics to the right technique. Don't just memorize algorithm names—know what makes each one effective and when it fails.
These techniques improve solutions by examining neighboring candidates and moving toward better ones. The core mechanism is iterative neighborhood evaluation without requiring derivative information.
Compare: Hill Climbing vs. Local Search with Restarts—both use the same neighborhood-based improvement logic, but restarts add diversification at the cost of computational overhead. If an FRQ asks about improving hill climbing's reliability, restarts are your first answer.
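To make the mechanics concrete, here is a minimal sketch of both ideas, assuming a rippled 1-D toy objective over the integers with a ±1-step neighborhood; the function names and parameter values are illustrative choices, not a standard API.

```python
import math
import random

def hill_climb(objective, x, neighbors):
    """Greedy improvement: move to the best neighbor until none is better."""
    while True:
        best = max(neighbors(x), key=objective)
        if objective(best) <= objective(x):
            return x  # local optimum: no neighbor strictly improves
        x = best

def hill_climb_with_restarts(objective, random_start, neighbors, n_restarts=20):
    """Diversification by restarting: climb from several random starting
    points and keep the best local optimum found."""
    climbs = (hill_climb(objective, random_start(), neighbors)
              for _ in range(n_restarts))
    return max(climbs, key=objective)

# Toy landscape (assumed for illustration): a rippled 1-D function over the
# integers with several local maxima, so any single climb is unreliable.
f = lambda x: 10 * math.sin(x / 3) - 0.05 * (x - 20) ** 2
random.seed(0)
best = hill_climb_with_restarts(f, lambda: random.randint(-100, 100),
                                lambda x: [x - 1, x + 1])
print(best, f(best))
```

Since each restart is independent, the outer loop parallelizes trivially, which is part of what the extra computational overhead buys you.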
These methods accept worse solutions with some probability, allowing escape from local optima. The key insight is that controlled randomness enables global exploration.
Compare: Simulated Annealing vs. Guided Local Search—both escape local optima, but SA uses random acceptance while GLS deterministically reshapes the objective landscape. GLS requires more problem knowledge but often converges faster.
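A minimal simulated annealing sketch for minimization, assuming a toy 1-D integer landscape with ±1 moves; the starting temperature, geometric cooling factor, and step budget are illustrative choices rather than canonical values.

```python
import math
import random

def simulated_annealing(objective, x, random_neighbor,
                        t0=10.0, alpha=0.998, steps=4000):
    """Always accept improvements; accept a worsening move with
    probability exp(-delta / T); cool T geometrically each step."""
    best, t = x, t0
    for _ in range(steps):
        y = random_neighbor(x)
        delta = objective(y) - objective(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y  # controlled randomness: uphill moves allowed early on
        if objective(x) < objective(best):
            best = x
        t *= alpha  # high T explores, low T exploits
    return best

# Toy landscape (assumed for illustration): quadratic plus ripples.
f = lambda x: (x - 7) ** 2 + 10 * math.sin(x)
random.seed(0)
print(simulated_annealing(f, 50, lambda x: x + random.choice([-1, 1])))
```

Note that the acceptance rule, not the neighborhood, does the escaping: early on exp(-delta/T) is large enough to cross barriers, and as T falls the method degenerates into plain hill climbing.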
These techniques use explicit memory structures to guide exploration and prevent cycling. Memory transforms local search from memoryless wandering into strategic navigation.
Compare: Tabu Search vs. Iterated Local Search—Tabu uses explicit move-level memory during search, while ILS uses perturbation between complete local searches. Tabu offers finer control; ILS is simpler to implement and tune.
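A minimal tabu search sketch, assuming minimization over the integers with neighborhood {x - 1, x + 1}; for simplicity the tabu list here stores recently visited points (a stand-in for move attributes), and the rippled objective and tenure value are made-up illustrations.

```python
import math
from collections import deque

def tabu_search(objective, x, tenure=10, steps=300):
    """Move to the best admissible neighbor even if it is worse. Recently
    visited points are tabu, so the search cannot slide straight back into
    the local optimum it just left; the aspiration criterion lifts the ban
    whenever a move would beat the best solution found so far."""
    best = x
    tabu = deque([x], maxlen=tenure)  # explicit short-term memory
    for _ in range(steps):
        candidates = [y for y in (x - 1, x + 1)
                      if y not in tabu or objective(y) < objective(best)]
        if not candidates:
            break
        x = min(candidates, key=objective)  # best admissible, even uphill
        tabu.append(x)
        if objective(x) < objective(best):
            best = x
    return best

# Toy landscape (assumed): ripples create local minima that plain hill
# climbing from x = 0 cannot cross; the tabu memory forces the walk onward.
f = lambda x: 0.1 * (x - 40) ** 2 + 20 * math.cos(x / 2)
result = tabu_search(f, 0)
print(result, f(result))
```

The contrast with ILS is visible in the structure: tabu search makes one continuous walk steered by memory, whereas ILS would alternate complete local searches with perturbation jumps.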
These approaches systematically vary how "neighbors" are defined to escape local optima. Changing the neighborhood changes what counts as a local optimum.
Compare: Variable Neighborhood Search vs. Simulated Annealing—both escape local optima, but VNS does so by changing the definition of "neighbor" while SA changes the acceptance criterion. VNS is often more effective when natural neighborhood hierarchies exist.
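A sketch of basic VNS under the same assumptions (integer domain, rippled toy objective); here "neighborhood N_k" is taken to mean a random jump of size up to 10k, which is an illustrative choice rather than part of the method's definition.

```python
import math
import random

def steepest_descent(objective, x):
    """Plain local search with +/-1 moves, run until a local minimum."""
    while True:
        y = min((x - 1, x + 1), key=objective)
        if objective(y) >= objective(x):
            return x
        x = y

def vns(objective, x, k_max=5, iterations=30):
    """Basic VNS: shake the incumbent in neighborhood N_k, descend, move
    and reset to N_1 on improvement, otherwise widen to N_{k+1}."""
    best = steepest_descent(objective, x)
    for _ in range(iterations):
        k = 1
        while k <= k_max:
            shaken = best + random.randint(-10 * k, 10 * k)  # N_k shake
            candidate = steepest_descent(objective, shaken)
            if objective(candidate) < objective(best):
                best, k = candidate, 1  # success: restart from N_1
            else:
                k += 1  # a local optimum for N_k may not be one for N_{k+1}
    return best

# Toy landscape (assumed for illustration), as in the tabu sketch above.
f = lambda x: 0.1 * (x - 40) ** 2 + 20 * math.cos(x / 2)
random.seed(1)
result = vns(f, 0)
print(result, f(result))
```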
These techniques maintain multiple candidate solutions simultaneously, using collective information to guide search. Population diversity enables parallel exploration of the solution space.
Compare: Genetic Algorithms vs. Ant Colony Optimization—both are population-based metaheuristics, but GAs manipulate complete solutions through recombination while ACO constructs solutions incrementally using learned probabilities. ACO excels on problems with natural sequential structure (like TSP); GAs are more general-purpose.
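A minimal generational GA sketch, assuming fixed-length bit strings with the "one-max" fitness (count the 1-bits); tournament size, population size, and mutation rate are illustrative parameters.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30,
                      generations=60, p_mut=0.05):
    """Tournament selection, one-point crossover, and per-bit mutation;
    the whole population is replaced each generation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        # pick 3 individuals at random, keep the fittest
        return max(random.sample(pop, 3), key=fitness)

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a, b = tournament(), tournament()
            cut = random.randrange(1, n_bits)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut)  # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

random.seed(0)
best = genetic_algorithm(sum)  # fitness = number of 1-bits ("one-max")
print(best, sum(best))
```

An ACO sketch would look structurally different: instead of recombining complete bit strings, each ant would build a solution component by component, biased by pheromone values that are reinforced along the components of good solutions.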
| Concept | Best Examples |
|---|---|
| Greedy improvement | Hill Climbing, Steepest Ascent |
| Probabilistic acceptance | Simulated Annealing |
| Explicit memory | Tabu Search, Guided Local Search |
| Perturbation-based escape | Iterated Local Search |
| Neighborhood variation | Variable Neighborhood Search |
| Population-based search | Genetic Algorithms, PSO, ACO |
| Diversification strategies | Restarts, Tabu long-term memory, VNS shaking |
| Construction heuristics | Ant Colony Optimization |
1. Which two techniques use explicit memory structures to guide search, and how do their memory mechanisms differ?
2. If you're solving a problem where the objective function is expensive to evaluate but you have access to parallel computing, which technique would you recommend and why?
3. Compare and contrast how Simulated Annealing and Variable Neighborhood Search escape local optima—what is the fundamental difference in their approach?
4. An FRQ describes a routing problem where solutions are naturally constructed edge-by-edge. Which population-based method is best suited for this structure, and what mechanism allows it to learn from previous solutions?
5. Explain why Iterated Local Search requires careful tuning of perturbation strength—what happens if the perturbation is too weak? Too strong?