Linear programming forms the backbone of optimization theory, and these techniques appear throughout your coursework—from basic feasibility problems to complex network models. You're being tested on your ability to recognize when to apply each method, why certain approaches work for specific problem structures, and how primal-dual relationships reveal deeper insights about resource allocation. Understanding these connections transforms a collection of algorithms into a coherent problem-solving toolkit.
Don't just memorize the steps of the simplex method or the mechanics of the Hungarian algorithm. Know what each technique reveals about the structure of optimal solutions, how duality connects seemingly different problems, and why discrete constraints fundamentally change computational complexity. When exam questions ask you to "explain" or "justify," they're testing whether you understand the underlying mathematical principles—not just whether you can execute an algorithm.
These methods form the foundation for solving linear programs. Each exploits the geometry of the feasible region—optimal solutions occur at vertices of the constraint polytope—but they approach this insight from different angles.
Compare: Simplex Method vs. Graphical Method—both exploit vertex optimality, but simplex scales to thousands of variables while graphical methods cap at two variables. Use graphical solutions to visualize why simplex works before tackling higher-dimensional problems.
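To see vertex optimality in action, here is a minimal sketch solving a small two-variable LP (the numbers are a made-up product-mix instance, chosen to be graphable) with SciPy's HiGHS-based simplex-family solver:

```python
# Hypothetical two-variable LP, small enough to solve graphically:
#   maximize 3x + 5y  subject to  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0
# SciPy's linprog minimizes, so we negate the objective coefficients.
from scipy.optimize import linprog

c = [-3, -5]                        # negated: maximize 3x + 5y
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)              # optimum at the vertex (2, 6), value 36
```

Plotting the three constraints confirms the solver lands exactly on a corner of the feasible polygon, which is the geometric fact the simplex method exploits in any dimension.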
When a starting basic feasible solution isn't obvious, these techniques introduce artificial variables to bootstrap the simplex method. The key insight: penalize artificial variables heavily enough that they exit the basis in any truly feasible problem.
Compare: Big M vs. Two-Phase—both handle artificial variables, but Two-Phase separates feasibility from optimality cleanly. If an FRQ asks about detecting infeasibility, Two-Phase provides a clearer criterion (Phase I objective > 0).
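The Phase I infeasibility criterion can be demonstrated directly: minimize the sum of the artificial variables and check whether the minimum is zero. The sketch below uses an intentionally inconsistent system (the numbers are made up for illustration):

```python
# Phase I of the two-phase method, posed as its own LP: add artificials
# a1, a2 to the inconsistent system  x1 + x2 = 1  and  x1 + x2 = 3  (x >= 0)
# and minimize a1 + a2. Variable order is [x1, x2, a1, a2].
from scipy.optimize import linprog

c = [0, 0, 1, 1]                    # Phase I objective: minimize a1 + a2
A_eq = [[1, 1, 1, 0],
        [1, 1, 0, 1]]
b_eq = [1, 3]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4, method="highs")
print(res.fun)                      # 2.0 > 0, so the original system is infeasible
```

Because the Phase I optimum is strictly positive, no choice of x1, x2 can satisfy both equations; a feasible problem would drive every artificial variable to zero.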
These concepts reveal what optimal solutions mean beyond their numerical values. Duality theory connects every linear program to a mirror problem, while sensitivity analysis explores how solutions respond to parameter changes.
Compare: Duality Theory vs. Sensitivity Analysis—duality gives you shadow prices at the optimum, while sensitivity analysis tells you how far those prices remain valid. Together, they answer "what's this constraint worth?" and "over what range?"
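Shadow prices can be read straight off a solver's dual values. This sketch reuses the same hypothetical product-mix LP (maximize 3x + 5y with x <= 4, 2y <= 12, 3x + 2y <= 18); the `ineqlin.marginals` attribute assumes SciPy >= 1.7 with the HiGHS backend:

```python
from scipy.optimize import linprog

res = linprog([-3, -5], A_ub=[[1, 0], [0, 2], [3, 2]], b_ub=[4, 12, 18],
              bounds=[(0, None)] * 2, method="highs")

# linprog minimizes, so negate the marginals to read them as shadow prices
# for the maximization: the objective gain from one more unit of each RHS.
shadow_prices = [-m for m in res.ineqlin.marginals]
print(shadow_prices)                # [0.0, 1.5, 1.0]
```

The first constraint is slack at the optimum, so its shadow price is zero (complementary slackness); the two binding constraints carry positive prices, valid only within the RHS ranges that sensitivity analysis would report.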
Standard LP assumes continuous variables, but many real-world decisions are inherently discrete. Integer constraints transform polynomial-time problems into NP-hard ones, requiring specialized algorithms.
Compare: Linear Programming vs. Integer Programming—LP relaxation provides a bound on the IP optimal value. The integrality gap (difference between LP and IP optima) measures how much discreteness costs you.
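The integrality gap is easy to measure on a tiny made-up instance: solve the relaxation, then re-solve with integrality enforced. The `integrality` flag assumes SciPy >= 1.9 (which routes to the HiGHS MILP solver):

```python
# Maximize x + y subject to 2x + 2y <= 3, x, y >= 0 (and integer for the IP).
from scipy.optimize import linprog

c, A_ub, b_ub = [-1, -1], [[2, 2]], [3]

lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
ip = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs",
             integrality=[1, 1])    # 1 = variable must be integer

print(-lp.fun, -ip.fun)             # LP bound 1.5 vs. IP optimum 1.0
print((-lp.fun) - (-ip.fun))        # integrality gap: 0.5
```

The LP relaxation's value of 1.5 is an upper bound the IP can never beat; branch-and-bound uses exactly such bounds to prune its search tree.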
These special LP structures have totally unimodular constraint matrices, so whenever the supplies, demands, or capacities are integers, the LP relaxation automatically has an integer optimal solution—no explicit integer constraints needed. This makes them computationally tractable despite their discrete nature.
Compare: Transportation vs. Assignment vs. Network Flow—all three are network LPs with integer solutions guaranteed. Transportation handles many-to-many shipments, assignment handles one-to-one matching, and max-flow handles capacity-constrained routing.
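For the one-to-one matching case, SciPy ships a dedicated solver, `linear_sum_assignment` (a Hungarian-style algorithm). The 3x3 cost matrix below is made up for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost[i][j] = cost of assigning worker i to job j.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

row_ind, col_ind = linear_sum_assignment(cost)    # minimizes total cost
print(col_ind, cost[row_ind, col_ind].sum())      # jobs [1 0 2], total cost 5
```

Note the solver returns a genuine permutation—integer 0/1 decisions—even though the underlying LP never imposed integrality; that is total unimodularity at work.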
| Concept | Best Examples |
|---|---|
| Vertex optimality | Simplex Method, Graphical Method |
| Artificial variable handling | Big M Method, Two-Phase Method |
| Primal-dual relationships | Duality Theory, Shadow Prices |
| Post-optimality insights | Sensitivity Analysis, Reduced Costs |
| Discrete optimization | Integer Programming, Branch-and-Bound |
| Network structures | Transportation, Assignment, Max-Flow |
| Polynomial-time special cases | Transportation Problem, Assignment Problem |
| Certificate of optimality | Strong Duality, Complementary Slackness |
1. Both the Big M method and Two-Phase method handle artificial variables—what computational advantage does Two-Phase offer, and when might Big M still be preferred?
2. If a constraint's shadow price is zero, what does this tell you about the constraint's role in the optimal solution? How would this change if the RHS increased?
3. Compare the transportation problem and assignment problem: what structural feature do they share that guarantees integer optimal solutions without explicit integer constraints?
4. Explain why the simplex method's efficiency depends on the geometry of the feasible region. What would happen if optimal solutions could occur at interior points?
5. An FRQ gives you a primal LP and asks you to interpret the dual variables economically. Using duality theory, how would you explain what the dual optimal values represent in terms of resource valuation?