🧮 Combinatorial Optimization

Key Local Search Techniques


Why This Matters

Local search techniques form the backbone of practical combinatorial optimization—they're how you actually solve problems when exact methods become computationally intractable. You're being tested on understanding the fundamental trade-off every optimizer faces: exploitation (improving the current solution) versus exploration (escaping to find better regions). Each technique in this guide represents a different strategy for balancing this trade-off, and exam questions will probe whether you understand why a particular method works, not just what it does.

These techniques demonstrate core principles like neighborhood structures, acceptance criteria, memory mechanisms, and population dynamics. When you encounter an FRQ asking you to recommend an optimization approach or analyze algorithm behavior, you need to connect the problem characteristics to the right technique. Don't just memorize algorithm names—know what makes each one effective and when it fails.


Gradient-Free Climbing Methods

These techniques improve solutions by examining neighboring candidates and moving toward better ones. The core mechanism is iterative neighborhood evaluation without requiring derivative information.

Hill Climbing

  • Greedy neighbor selection—moves only to solutions that improve the objective function, making it fast but myopic
  • Local optima trapping is the critical weakness; the algorithm terminates when no improving neighbor exists
  • Variants matter: steepest ascent evaluates all neighbors and picks the best; simple hill climbing accepts the first improvement found (see the sketch below)
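
To make the mechanics concrete, here is a minimal Python sketch of steepest-ascent hill climbing on a toy one-max problem (maximize the number of 1-bits) with a single-bit-flip neighborhood. The problem, names, and sizes are illustrative assumptions, not from the source.

```python
import random

def one_max(x):
    """Toy objective: number of 1-bits (the global optimum is all ones)."""
    return sum(x)

def neighbors(x):
    """Single-bit-flip neighborhood of a bit vector."""
    for i in range(len(x)):
        y = x[:]
        y[i] = 1 - y[i]
        yield y

def hill_climb(x, f):
    """Steepest ascent: move to the best neighbor until none improves."""
    while True:
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):   # no improving neighbor: a local optimum
            return x
        x = best

random.seed(0)
start = [random.randint(0, 1) for _ in range(20)]
print(one_max(hill_climb(start, one_max)))  # one-max is unimodal, so this reaches 20
```

One-max has no non-global local optima, so the climber always succeeds here; on a rugged objective the same loop would stop at the first local optimum it reaches.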

Local Search with Restarts

  • Multiple independent runs from different starting points help overcome the single-trajectory limitation of basic hill climbing
  • Diversification through randomization—each restart samples a new region of the solution space
  • Embarrassingly parallel structure makes this technique ideal for distributed computing environments; a sketch of this pattern follows
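
A sketch of random restarts wrapped around a greedy climb, here on an illustrative rugged 1-D function; the restart count, step size, and sampling range are arbitrary choices for demonstration.

```python
import random
from math import sin

def f(x):
    """Rugged 1-D toy objective with many local maxima."""
    return sin(x) * x

def climb(x, step=0.1):
    """Greedy ascent over the two neighbors x - step and x + step."""
    while True:
        nxt = max((x - step, x + step), key=f)
        if f(nxt) <= f(x):
            return x          # stuck at a local maximum
        x = nxt

def restart_search(n_restarts=20, lo=-20.0, hi=20.0):
    """Independent climbs from random starts; keep the best local optimum.
    The runs share nothing, so they could also execute in parallel."""
    starts = [random.uniform(lo, hi) for _ in range(n_restarts)]
    return max((climb(s) for s in starts), key=f)

random.seed(1)
x = restart_search()
print(round(x, 2), round(f(x), 2))
```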

Compare: Hill Climbing vs. Local Search with Restarts—both use the same neighborhood-based improvement logic, but restarts add diversification at the cost of computational overhead. If an FRQ asks about improving hill climbing's reliability, restarts are your first answer.


Temperature and Probability-Based Escape

These methods accept worse solutions with some probability, allowing escape from local optima. The key insight is that controlled randomness enables global exploration.

Simulated Annealing

  • Metallurgy-inspired cooling schedule—the temperature parameter $T$ decreases over time, reducing the probability of accepting worse moves
  • Acceptance probability follows $P(\Delta E) = e^{-\Delta E / T}$, where $\Delta E$ is the cost increase
  • Convergence guarantees exist under specific cooling schedules; sufficiently slow cooling theoretically finds the global optimum (a minimal implementation is sketched below)
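
A minimal sketch of the Metropolis acceptance rule with a geometric cooling schedule; the initial temperature, cooling rate, iteration budget, and toy objective are all assumptions, not prescribed values.

```python
import math
import random

def simulated_annealing(x, f, neighbor, t0=10.0, alpha=0.95, iters=5000):
    """Minimize f using Metropolis acceptance with geometric cooling."""
    best = cur = x
    t = t0
    for _ in range(iters):
        cand = neighbor(cur)
        delta = f(cand) - f(cur)              # cost increase of the move
        # Always accept improvements; accept worse moves with prob e^(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cur = cand
            if f(cur) < f(best):
                best = cur
        t *= alpha                            # geometric cooling schedule
    return best

# Toy usage: a rugged 1-D landscape with many local minima.
random.seed(0)
rugged = lambda x: (x - 3) ** 2 + 4 * math.sin(5 * x)
sol = simulated_annealing(0.0, rugged, lambda x: x + random.uniform(-1, 1))
print(round(sol, 3), round(rugged(sol), 3))
```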

Guided Local Search

  • Penalty-augmented objective function—adds costs to solution features that appear too frequently in local optima
  • Dynamic landscape modification effectively "fills in" local optima valleys, pushing the search toward unexplored regions
  • Feature-based memory tracks which solution components to penalize, requiring problem-specific feature definitions; the penalty update is sketched below
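
A sketch of the standard GLS penalty scheme, assuming a generic feature-based formulation with utility $u_i = c_i / (1 + p_i)$; the feature encoding and numbers in the demo are purely illustrative.

```python
def augmented_cost(s, f, features, penalties, lam):
    """GLS runs local search on h = f + lam * (penalties of active features)."""
    return f(s) + lam * sum(penalties[i] for i in features(s))

def update_penalties(local_opt, features, costs, penalties):
    """At each local optimum, penalize the active features of maximal
    utility u_i = c_i / (1 + p_i): costly, rarely penalized features."""
    active = list(features(local_opt))
    util = {i: costs[i] / (1 + penalties[i]) for i in active}
    top = max(util.values())
    for i in active:
        if util[i] == top:
            penalties[i] += 1

# Tiny demo: a 'solution' that uses edges 0 and 2, with per-edge costs.
features = lambda s: s                 # here the solution is its own feature set
costs = {0: 5.0, 1: 1.0, 2: 2.0}
penalties = {0: 0, 1: 0, 2: 0}
update_penalties([0, 2], features, costs, penalties)
print(penalties)                       # {0: 1, 1: 0, 2: 0} -> costly edge 0 penalized
print(augmented_cost([0, 2], lambda s: 7.0, features, penalties, lam=1.0))  # 8.0
```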

Compare: Simulated Annealing vs. Guided Local Search—both escape local optima, but SA uses random acceptance while GLS deterministically reshapes the objective landscape. GLS requires more problem knowledge but often converges faster.


Memory-Based Methods

These techniques use explicit memory structures to guide exploration and prevent cycling. Memory transforms local search from memoryless wandering into strategic navigation.

Tabu Search

  • Short-term memory (tabu list) forbids recently visited solutions or moves, preventing immediate cycling
  • Aspiration criteria override tabu status when a forbidden move leads to the best-ever solution
  • Long-term memory structures enable intensification (focusing on promising regions) and diversification (exploring neglected areas); a short-term-memory sketch follows this list
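
A compact sketch of tabu search with a move-based tabu list and a best-ever aspiration criterion, again on a toy bit-vector problem; the tenure and iteration counts are illustrative.

```python
import random
from collections import deque

def tabu_search(x, f, tenure=5, iters=100):
    """Maximize f over bit vectors, forbidding recently flipped positions."""
    best = cur = x[:]
    tabu = deque(maxlen=tenure)             # short-term memory of moves
    for _ in range(iters):
        candidates = []
        for i in range(len(cur)):
            y = cur[:]
            y[i] = 1 - y[i]
            # Aspiration: a tabu move is admissible if it beats the best ever.
            if i not in tabu or f(y) > f(best):
                candidates.append((f(y), i, y))
        score, move, cur = max(candidates)  # best admissible neighbor, even if worse
        tabu.append(move)                   # forbid reversing this move for a while
        if score > f(best):
            best = cur[:]
    return best

random.seed(2)
one_max = lambda b: sum(b)                  # toy objective
print(one_max(tabu_search([random.randint(0, 1) for _ in range(15)], one_max)))
```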

Iterated Local Search

  • Perturbation-based escape—after reaching a local optimum, applies a controlled "kick" to the solution before restarting local search
  • Perturbation strength is critical: too weak returns to the same optimum; too strong becomes random restart
  • Acceptance criteria for the new local optimum can incorporate history, creating implicit memory (a minimal loop is sketched below)
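
A minimal ILS loop on the same kind of toy problem, with a first-improvement descent and a random "kick"; the perturbation strength of 3 bit flips is an arbitrary illustrative choice.

```python
import random

def one_max(x):
    return sum(x)

def local_search(x):
    """First-improvement bit-flip ascent."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:]
            y[i] = 1 - y[i]
            if one_max(y) > one_max(x):
                x, improved = y, True
    return x

def perturb(x, strength=3):
    """The 'kick': flip a few random bits. Strength tunes escape vs. restart."""
    y = x[:]
    for i in random.sample(range(len(y)), strength):
        y[i] = 1 - y[i]
    return y

def ils(x, iters=30):
    cur = local_search(x)
    for _ in range(iters):
        cand = local_search(perturb(cur))   # kick, then re-descend
        if one_max(cand) >= one_max(cur):   # simple acceptance criterion
            cur = cand
    return cur

random.seed(3)
print(one_max(ils([random.randint(0, 1) for _ in range(20)])))
```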

Compare: Tabu Search vs. Iterated Local Search—Tabu uses explicit move-level memory during search, while ILS uses perturbation between complete local searches. Tabu offers finer control; ILS is simpler to implement and tune.


Neighborhood Structure Manipulation

These approaches systematically vary how "neighbors" are defined to escape local optima. Changing the neighborhood changes what counts as a local optimum.

Variable Neighborhood Search

  • Systematic neighborhood switching—when stuck in one neighborhood structure, switches to a different (usually larger) one
  • Nested structure: local search uses small neighborhoods for efficiency; shaking uses larger neighborhoods for escape
  • Deterministic vs. stochastic variants exist; the basic VNS uses random shaking followed by deterministic descent, as in the sketch below
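
A sketch of basic VNS: random shaking in the k-th neighborhood (here, flipping k bits) followed by deterministic descent; k_max, the round count, and the toy objective are illustrative assumptions.

```python
import random

def f(x):
    """Toy objective: count of 1-bits."""
    return sum(x)

def descend(x):
    """Single-bit-flip steepest ascent (the small, cheap neighborhood)."""
    while True:
        best = max((x[:i] + [1 - x[i]] + x[i+1:] for i in range(len(x))), key=f)
        if f(best) <= f(x):
            return x
        x = best

def shake(x, k):
    """Random jump in the k-th neighborhood: flip k random bits."""
    y = x[:]
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y

def vns(x, k_max=4, rounds=20):
    cur = descend(x)
    for _ in range(rounds):
        k = 1
        while k <= k_max:                   # escalate neighborhood size
            cand = descend(shake(cur, k))
            if f(cand) > f(cur):
                cur, k = cand, 1            # improvement: restart at N_1
            else:
                k += 1                      # failure: try a larger neighborhood
    return cur

random.seed(4)
print(f(vns([random.randint(0, 1) for _ in range(15)])))
```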

Compare: Variable Neighborhood Search vs. Simulated Annealing—both escape local optima, but VNS does so by changing the definition of "neighbor" while SA changes the acceptance criterion. VNS is often more effective when natural neighborhood hierarchies exist.


Population-Based Methods

These techniques maintain multiple candidate solutions simultaneously, using collective information to guide search. Population diversity enables parallel exploration of the solution space.

Genetic Algorithms

  • Evolution-inspired operators: selection favors fit individuals, crossover combines parent solutions, mutation introduces variation
  • Schema theory suggests GAs efficiently search by implicitly evaluating building blocks shared across solutions
  • Premature convergence occurs when population diversity collapses; maintaining diversity is a key design challenge (see the sketch after this list)
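
A minimal generational GA sketch with binary tournament selection, one-point crossover, and bit-flip mutation on a one-max toy fitness; the population size, generation count, and mutation rate are illustrative defaults.

```python
import random

def ga(f, n_bits=20, pop_size=30, gens=40, p_mut=0.05):
    """Tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # Selection: binary tournaments favor fitter parents.
            p1 = max(random.sample(pop, 2), key=f)
            p2 = max(random.sample(pop, 2), key=f)
            # Crossover: splice the parents at a random cut point.
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Mutation: independent bit flips maintain diversity.
            child = [1 - b if random.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=f)

random.seed(5)
print(sum(ga(lambda x: sum(x))))  # one-max toy fitness
```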

Particle Swarm Optimization

  • Swarm intelligence model—each particle tracks its personal best position and the global best across the swarm
  • Velocity update combines inertia, cognitive component (toward personal best), and social component (toward global best)
  • Continuous-space native—originally designed for $\mathbb{R}^n$; requires adaptation for discrete combinatorial problems (the sketch below stays in continuous space)
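
A sketch of the canonical velocity and position update, minimizing a toy sphere function; the inertia weight and acceleration coefficients are common textbook values, not prescriptions.

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over R^dim with the standard PSO update."""
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                      # personal best positions
    gbest = min(pbest, key=f)                       # global best across the swarm
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (personal best) + social pull (global best)
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
    return gbest

random.seed(6)
sphere = lambda x: sum(v * v for v in x)            # toy convex objective
print([round(v, 3) for v in pso(sphere)])
```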

Ant Colony Optimization

  • Pheromone-based communication—artificial ants deposit pheromone on solution components, biasing future construction
  • Evaporation mechanism prevents convergence to suboptimal solutions by gradually reducing old pheromone trails
  • Construction-based rather than improvement-based; ants build solutions incrementally using probabilistic rules, as the sketch below illustrates
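
A compact Ant System-style sketch on a small random TSP instance; the alpha, beta, and evaporation settings and the instance itself are illustrative assumptions.

```python
import random

def aco_tsp(dist, n_ants=10, iters=50, rho=0.5, alpha=1.0, beta=2.0):
    """Build tours from pheromone and heuristic desirability, then update trails."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]              # pheromone trails
    best, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour, seen = [0], {0}
            while len(tour) < n:
                i = tour[-1]
                opts = [j for j in range(n) if j not in seen]
                # Construction rule: pheromone^alpha * (1/distance)^beta
                wts = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in opts]
                j = random.choices(opts, weights=wts)[0]
                tour.append(j)
                seen.add(j)
            tours.append(tour)
        for i in range(n):                            # evaporation shrinks old trails
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour in tours:                            # deposit, weighted by tour quality
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best, best_len = tour, length
            for k in range(n):
                tau[tour[k]][tour[(k + 1) % n]] += 1.0 / length
    return best, best_len

random.seed(7)
pts = [(random.random(), random.random()) for _ in range(8)]
dist = [[((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5 or 1e-9 for b in pts] for a in pts]
print(aco_tsp(dist))
```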

Compare: Genetic Algorithms vs. Ant Colony Optimization—both are population-based metaheuristics, but GAs manipulate complete solutions through recombination while ACO constructs solutions incrementally using learned probabilities. ACO excels on problems with natural sequential structure (like TSP); GAs are more general-purpose.


Quick Reference Table

Concept | Best Examples
Greedy improvement | Hill Climbing, Steepest Ascent
Probabilistic acceptance | Simulated Annealing
Explicit memory | Tabu Search, Guided Local Search
Perturbation-based escape | Iterated Local Search
Neighborhood variation | Variable Neighborhood Search
Population-based search | Genetic Algorithms, PSO, ACO
Diversification strategies | Restarts, Tabu long-term memory, VNS shaking
Construction heuristics | Ant Colony Optimization

Self-Check Questions

  1. Which two techniques use explicit memory structures to guide search, and how do their memory mechanisms differ?

  2. If you're solving a problem where the objective function is expensive to evaluate but you have access to parallel computing, which technique would you recommend and why?

  3. Compare and contrast how Simulated Annealing and Variable Neighborhood Search escape local optima—what is the fundamental difference in their approach?

  4. An FRQ describes a routing problem where solutions are naturally constructed edge-by-edge. Which population-based method is best suited for this structure, and what mechanism allows it to learn from previous solutions?

  5. Explain why Iterated Local Search requires careful tuning of perturbation strength—what happens if the perturbation is too weak? Too strong?