
📊 Mathematical Methods for Optimization

Key Concepts in Linear Programming Techniques


Why This Matters

Linear programming forms the backbone of optimization theory, and these techniques appear throughout your coursework—from basic feasibility problems to complex network models. You're being tested on your ability to recognize when to apply each method, why certain approaches work for specific problem structures, and how primal-dual relationships reveal deeper insights about resource allocation. Understanding these connections transforms a collection of algorithms into a coherent problem-solving toolkit.

Don't just memorize the steps of the simplex method or the mechanics of the Hungarian algorithm. Know what each technique reveals about the structure of optimal solutions, how duality connects seemingly different problems, and why discrete constraints fundamentally change computational complexity. When exam questions ask you to "explain" or "justify," they're testing whether you understand the underlying mathematical principles—not just whether you can execute an algorithm.


Core Solution Algorithms

These methods form the foundation for solving linear programs. Each exploits the geometry of the feasible region—optimal solutions occur at vertices of the constraint polytope—but they approach this insight from different angles.

Simplex Method

  • Iterative vertex-hopping algorithm—moves along edges of the feasible region, improving the objective value at each step until reaching optimality
  • Tableau format organizes coefficients systematically, with pivot operations revealing basic feasible solutions and their corresponding objective values
  • Detects special cases including unboundedness (objective improves indefinitely) and infeasibility (no solution satisfies all constraints)
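
To make the vertex-hopping concrete, here is a minimal tableau simplex, sketched for problems already in the form max $\mathbf{c}^T\mathbf{x}$ with $A\mathbf{x} \leq \mathbf{b}$, $\mathbf{x} \geq \mathbf{0}$, and $\mathbf{b} \geq \mathbf{0}$ (so the slack basis is feasible). It is a teaching sketch; production solvers add anti-cycling rules and sparse linear algebra that this omits.

```python
# A minimal tableau simplex for  max c.x  s.t.  A x <= b, x >= 0  (b >= 0),
# so the slack variables form the starting basic feasible solution.
import numpy as np

def simplex(c, A, b):
    m, n = A.shape
    # Tableau layout: [A | I | b] with objective row [-c | 0 | 0] underneath.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c                       # negative entries => improvable
    basis = list(range(n, n + m))        # slacks start in the basis
    while True:
        j = int(np.argmin(T[-1, :-1]))   # entering variable: most negative cost
        if T[-1, j] >= -1e-9:
            break                        # no improving direction: optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))       # leaving row: minimum ratio test
        if ratios[i] == np.inf:
            raise ValueError("unbounded: objective improves indefinitely")
        T[i, :] /= T[i, j]               # pivot: normalize row, eliminate column
        for k in range(m + 1):
            if k != i:
                T[k, :] -= T[k, j] * T[i, :]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# max 3x1 + 5x2  s.t.  x1 <= 4,  2x2 <= 12,  3x1 + 2x2 <= 18
x, z = simplex(np.array([3., 5.]),
               np.array([[1., 0.], [0., 2.], [3., 2.]]),
               np.array([4., 12., 18.]))
print(x, z)   # vertex-hops to x = (2, 6) with objective 36
```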

Graphical Method for Two-Variable Problems

  • Visual representation of constraints as half-planes, with the feasible region forming a convex polygon in $\mathbb{R}^2$
  • Optimal solution at a vertex—evaluate the objective function at corner points to identify the maximum or minimum
  • Limited to two variables but invaluable for building geometric intuition about constraint interactions and objective function gradients

Compare: Simplex Method vs. Graphical Method—both exploit vertex optimality, but simplex scales to thousands of variables while graphical methods cap at two. Use graphical solutions to visualize why simplex works before tackling higher-dimensional problems.
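
For two variables, the geometry can also be checked by brute force: intersect every pair of constraint boundaries, keep the feasible intersections, and evaluate the objective at each. A sketch, reusing the example LP from the simplex block above:

```python
# Enumerate candidate vertices of a 2-variable LP by intersecting every
# pair of constraint boundaries, then score the feasible ones.
import itertools
import numpy as np

# Rows of  a.x <= b, including the nonnegativity constraints x1, x2 >= 0.
A = np.array([[1., 0.], [0., 2.], [3., 2.], [-1., 0.], [0., -1.]])
b = np.array([4., 12., 18., 0., 0.])
c = np.array([3., 5.])                    # maximize c.x

best = None
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                          # parallel boundaries: no vertex
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):         # feasibility check
        if best is None or c @ v > c @ best:
            best = v
print(best, c @ best)                     # again x = (2, 6), objective 36
```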


Handling Infeasibility and Artificial Variables

When a starting basic feasible solution isn't obvious, these techniques introduce artificial variables to bootstrap the simplex method. The key insight: penalize artificial variables heavily enough that they exit the basis in any truly feasible problem.

Big M Method

  • Introduces large constant $M$ in the objective function to penalize artificial variables, driving them to zero in optimal solutions
  • Single-phase approach—solves the modified problem directly, with $M$ chosen large enough to dominate other coefficients
  • Risk of numerical instability when $M$ is too large relative to problem data, potentially causing computational errors
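
A minimal sketch of the construction, using scipy.optimize.linprog as the LP engine and $M = 10^6$ (both are assumptions of this sketch, not a prescribed recipe): append one artificial variable per equality row and charge it $M$ in the objective.

```python
# Big M made explicit: solve  min c.x  s.t.  A x = b, x >= 0  by adding
# artificial variables a with  A x + I a = b  and objective cost M each.
import numpy as np
from scipy.optimize import linprog

def big_m_solve(c, A_eq, b_eq, M=1e6):
    m, n = A_eq.shape
    c_big = np.concatenate([c, M * np.ones(m)])   # penalize artificials
    A_big = np.hstack([A_eq, np.eye(m)])          # A x + I a = b
    res = linprog(c_big, A_eq=A_big, b_eq=b_eq, method="highs")
    if res.x is not None and np.all(res.x[n:] < 1e-6):
        return res.x[:n], res.fun                 # artificials left the basis
    raise ValueError("infeasible: some artificial stays positive")

# min x1 + 2x2  s.t.  x1 + x2 = 4,  x1 - x2 = 0
x, z = big_m_solve(np.array([1., 2.]),
                   np.array([[1., 1.], [1., -1.]]),
                   np.array([4., 0.]))
print(x, z)   # expected: x = (2, 2), objective 6
```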

Two-Phase Method

  • Phase I minimizes artificial variable sum—if the minimum equals zero, a feasible solution exists; otherwise, the problem is infeasible
  • Phase II optimizes the original objective starting from the feasible basis found in Phase I, with all artificial variables removed
  • Numerically stable because it avoids arbitrary large constants, making it preferred for computational implementations
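
The same bootstrapping, sketched as two separate linprog calls (a simplification: a textbook Phase II would warm-start from the Phase I basis rather than re-solve from scratch):

```python
# Two-phase method: Phase I tests feasibility by minimizing the artificial
# sum; Phase II optimizes the original objective only if that minimum is 0.
import numpy as np
from scipy.optimize import linprog

def two_phase(c, A_eq, b_eq):
    m, n = A_eq.shape
    # Phase I:  min 1.a  s.t.  A x + I a = b,  x, a >= 0
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    A1 = np.hstack([A_eq, np.eye(m)])
    p1 = linprog(c1, A_eq=A1, b_eq=b_eq, method="highs")
    if p1.fun is None or p1.fun > 1e-8:
        raise ValueError("infeasible: Phase I optimum is positive")
    # Phase II: optimize the real objective (re-solved here for brevity).
    p2 = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")
    return p2.x, p2.fun

x, z = two_phase(np.array([1., 2.]),
                 np.array([[1., 1.], [1., -1.]]),
                 np.array([4., 0.]))
print(x, z)   # same optimum as the Big M sketch: x = (2, 2), objective 6
```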

Compare: Big M vs. Two-Phase—both handle artificial variables, but Two-Phase separates feasibility from optimality cleanly. If an FRQ asks about detecting infeasibility, Two-Phase provides a clearer criterion (Phase I objective > 0).


Duality and Solution Analysis

These concepts reveal what optimal solutions mean beyond their numerical values. Duality theory connects every linear program to a mirror problem, while sensitivity analysis explores how solutions respond to parameter changes.

Duality Theory

  • Every primal LP has a dual—if the primal maximizes $\mathbf{c}^T\mathbf{x}$ subject to $A\mathbf{x} \leq \mathbf{b}$, the dual minimizes $\mathbf{b}^T\mathbf{y}$ subject to $A^T\mathbf{y} \geq \mathbf{c}$
  • Strong duality theorem states optimal primal and dual objective values are equal, providing a certificate of optimality
  • Shadow prices (dual variables) quantify the marginal value of relaxing each constraint by one unit
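
Strong duality is easy to verify numerically: solve the primal and the dual as two separate LPs and compare optimal values. A sketch, again assuming scipy.optimize.linprog (which minimizes, so the max primal is negated):

```python
# Check strong duality on  max c.x  s.t.  A x <= b, x >= 0  and its dual
# min b.y  s.t.  A^T y >= c, y >= 0  (written as -A^T y <= -c for linprog).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1., 0.], [0., 2.], [3., 2.]])
b = np.array([4., 12., 18.])
c = np.array([3., 5.])

primal = linprog(-c, A_ub=A, b_ub=b, method="highs")
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")
print(-primal.fun, dual.fun)   # both 36.0: equal values certify optimality
print(dual.x)                  # shadow prices y = (0, 1.5, 1)
```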

Sensitivity Analysis

  • Allowable ranges identify how much objective coefficients or RHS values can change before the optimal basis changes
  • Reduced costs indicate how much a non-basic variable's objective coefficient must improve before it enters the basis
  • Binding vs. non-binding constraints—complementary slackness ensures non-binding constraints have zero shadow prices, and a positive shadow price means its constraint is binding (a binding constraint can still have a zero shadow price under degeneracy)
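
A finite-difference sanity check ties these ideas together: relax one RHS by a unit, re-solve, and the objective gain should equal that constraint's shadow price, as long as the change stays inside the allowable range:

```python
# Perturb each RHS by +1 and compare the objective gain to the shadow price.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1., 0.], [0., 2.], [3., 2.]])
b = np.array([4., 12., 18.])
c = np.array([3., 5.])

base = linprog(-c, A_ub=A, b_ub=b, method="highs")
for i in range(len(b)):
    bumped = b.copy()
    bumped[i] += 1.0                      # relax constraint i by one unit
    res = linprog(-c, A_ub=A, b_ub=bumped, method="highs")
    print(f"constraint {i}: objective gain = {base.fun - res.fun:.2f}")
# gains 0.00, 1.50, 1.00 match the dual values; the zero confirms that
# relaxing a non-binding constraint buys nothing
```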

Compare: Duality Theory vs. Sensitivity Analysis—duality gives you shadow prices at the optimum, while sensitivity analysis tells you how far those prices remain valid. Together, they answer "what's this constraint worth?" and "over what range?"


Discrete and Structured Problems

Standard LP assumes continuous variables, but many real-world decisions are inherently discrete. Integer constraints transform polynomial-time problems into NP-hard ones, requiring specialized algorithms.

Integer Programming

  • Decision variables restricted to integers (or binary 0/1), modeling indivisible resources like machines, people, or yes/no decisions
  • Branch-and-bound systematically partitions the feasible region, using LP relaxations to prune suboptimal branches
  • Cutting planes add constraints that exclude fractional solutions without eliminating integer-feasible points
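
A bare-bones branch-and-bound sketch, bounding each node with its LP relaxation via scipy.optimize.linprog; branching on the first fractional variable and the fixed tolerances are simplifications of what real solvers do:

```python
# Branch-and-bound for  max c.x  s.t.  A x <= b, x >= 0, x integer:
# bound each node by its LP relaxation, branch on a fractional variable.
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b, bounds, best=(-np.inf, None)):
    res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    if res.x is None or -res.fun <= best[0] + 1e-9:
        return best                        # infeasible node, or bound prunes it
    frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
    if not frac:                           # all-integer: new incumbent
        return (-res.fun, res.x)
    i, v = frac[0], res.x[frac[0]]         # branch on first fractional variable
    lo = [list(p) for p in bounds]; lo[i][1] = math.floor(v)   # x_i <= floor(v)
    hi = [list(p) for p in bounds]; hi[i][0] = math.ceil(v)    # x_i >= ceil(v)
    best = branch_and_bound(c, A, b, lo, best)
    return branch_and_bound(c, A, b, hi, best)

# max 5x1 + 4x2  s.t.  6x1 + 4x2 <= 24,  x1 + 2x2 <= 6,  x integer
z, x = branch_and_bound(np.array([5., 4.]),
                        np.array([[6., 4.], [1., 2.]]),
                        np.array([24., 6.]),
                        [(0, None), (0, None)])
print(z, x)   # relaxation gives 21 at (3, 1.5); the IP optimum is 20 at (4, 0)
```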

Compare: Linear Programming vs. Integer Programming—LP relaxation provides a bound on the IP optimal value. The integrality gap (difference between LP and IP optima) measures how much discreteness costs you.


Network and Assignment Models

These special LP structures have totally unimodular constraint matrices, guaranteeing integer solutions without explicit integer constraints. This makes them computationally tractable despite their discrete nature.

Transportation Problem

  • Minimizes shipping costs from $m$ suppliers to $n$ consumers, with supply constraints $\sum_j x_{ij} \leq s_i$ and demand constraints $\sum_i x_{ij} \geq d_j$
  • Special-purpose algorithms (Northwest Corner, Vogel's Approximation, MODI) exploit the problem structure for faster solutions than general simplex
  • Balanced vs. unbalanced—add dummy supply or demand nodes to handle cases where total supply ≠ total demand
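
The Northwest Corner rule fits in a few lines; it produces an initial basic feasible solution for a balanced instance (total supply = total demand), which MODI or stepping-stone improvement would then refine:

```python
# Northwest Corner rule: fill allocations from the top-left cell, moving
# right when a demand is met and down when a supply is exhausted.
def northwest_corner(supply, demand):
    s, d = list(supply), list(demand)     # copies we can consume
    alloc = [[0] * len(d) for _ in s]
    i = j = 0
    while i < len(s) and j < len(d):
        qty = min(s[i], d[j])             # ship as much as the corner allows
        alloc[i][j] = qty
        s[i] -= qty
        d[j] -= qty
        if s[i] == 0:
            i += 1                        # supplier exhausted: move down
        else:
            j += 1                        # consumer satisfied: move right
    return alloc

# 3 suppliers and 4 consumers, balanced at 55 units total
print(northwest_corner([20, 30, 5], [10, 25, 15, 5]))
# [[10, 10, 0, 0], [0, 15, 15, 0], [0, 0, 0, 5]]
```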

Assignment Problem

  • One-to-one matching of $n$ agents to $n$ tasks, minimizing total cost with constraints ensuring each agent and task appears exactly once
  • Hungarian algorithm solves in $O(n^3)$ time by iteratively reducing the cost matrix and finding augmenting paths
  • Special case of transportation where all supplies and demands equal 1, but specialized algorithms outperform general methods
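
SciPy exposes a Hungarian-style solver as scipy.optimize.linear_sum_assignment; it takes the cost matrix and returns the optimal row-to-column matching:

```python
# Optimal one-to-one assignment of 3 agents to 3 tasks.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)       # minimizes total cost
print(list(zip(rows, cols)), cost[rows, cols].sum())
# [(0, 1), (1, 0), (2, 2)] with total cost 1 + 2 + 2 = 5
```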

Network Flow Problems

  • Maximizes flow from source to sink through a capacitated network, respecting edge capacities and flow conservation at nodes
  • Ford-Fulkerson method repeatedly finds augmenting paths, increasing flow until no path from source to sink exists in the residual graph
  • Max-flow min-cut theorem states maximum flow equals the minimum capacity of any cut separating source from sink
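
A compact Ford-Fulkerson using breadth-first search to pick augmenting paths (the Edmonds-Karp variant), on an adjacency-matrix residual graph; serviceable for small dense examples, while production code would use adjacency lists:

```python
# Edmonds-Karp max flow: BFS finds an augmenting path in the residual
# graph; push the bottleneck flow and repeat until no path remains.
from collections import deque

def max_flow(cap, source, sink):
    n = len(cap)
    resid = [row[:] for row in cap]       # residual capacities (copy)
    total = 0
    while True:
        parent = [-1] * n                 # BFS tree for path recovery
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and resid[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total                  # residual graph has no s-t path
        flow, v = float("inf"), sink      # bottleneck capacity on the path
        while v != source:
            flow = min(flow, resid[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:                # update residuals along the path
            resid[parent[v]][v] -= flow
            resid[v][parent[v]] += flow   # reverse edge lets flow be undone
            v = parent[v]
        total += flow

# node 0 = source, node 3 = sink
cap = [[0, 4, 5, 0],
       [0, 0, 3, 4],
       [0, 0, 0, 5],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))   # 9, matching the min cut {0} of capacity 4 + 5
```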

Compare: Transportation vs. Assignment vs. Network Flow—all three are network LPs with integer solutions guaranteed. Transportation handles many-to-many shipments, assignment handles one-to-one matching, and max-flow handles capacity-constrained routing.


Quick Reference Table

Concept                          Best Examples
Vertex optimality                Simplex Method, Graphical Method
Artificial variable handling     Big M Method, Two-Phase Method
Primal-dual relationships        Duality Theory, Shadow Prices
Post-optimality insights         Sensitivity Analysis, Reduced Costs
Discrete optimization            Integer Programming, Branch-and-Bound
Network structures               Transportation, Assignment, Max-Flow
Polynomial-time special cases    Transportation Problem, Assignment Problem
Certificate of optimality        Strong Duality, Complementary Slackness

Self-Check Questions

  1. Both the Big M method and Two-Phase method handle artificial variables—what computational advantage does Two-Phase offer, and when might Big M still be preferred?

  2. If a constraint's shadow price is zero, what does this tell you about the constraint's role in the optimal solution? How would this change if the RHS increased?

  3. Compare the transportation problem and assignment problem: what structural feature do they share that guarantees integer optimal solutions without explicit integer constraints?

  4. Explain why the simplex method's efficiency depends on the geometry of the feasible region. What would happen if optimal solutions could occur at interior points?

  5. An FRQ gives you a primal LP and asks you to interpret the dual variables economically. Using duality theory, how would you explain what the dual optimal values represent in terms of resource valuation?