
Calculus IV Unit 7 Review


7.3 Optimization problems in multiple variables


Written by the Fiveable Content Team • Last updated August 2025

Constrained Optimization

Overview of Constrained Optimization

Constrained optimization is the problem of finding the maximum or minimum of a function when the variables aren't free to take any value they want. Instead, they must satisfy one or more conditions (constraints) that restrict where you can look for solutions.

  • The objective function is the quantity you're trying to optimize, expressed as a function of your decision variables (e.g., f(x, y, z)).
  • A constraint is a condition the variables must satisfy for a solution to count. For example, a budget limit or a surface equation.
  • The feasible region is the set of all points that satisfy every constraint simultaneously. Your optimal point must live somewhere in this region.

Key Components and Concepts

Decision variables are the independent variables you can adjust to optimize the objective function.

Constraints come in two flavors:

  • Equality constraints require the variables to satisfy a specific equation, like g(x, y) = 0. These are the type Lagrange multipliers handle directly.
  • Inequality constraints specify a range of acceptable values using \leq, \geq, etc. These require additional techniques beyond standard Lagrange multipliers.

Graphing the feasible region (when possible in 2D or 3D) helps you visualize where the constraint boundaries intersect and where the optimum might occur.
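When the region is hard to sketch, a quick script can make the idea concrete. This is a minimal sketch with a hypothetical objective f(x, y) = xy and a hypothetical triangular feasible region x + y ≤ 4, x ≥ 0, y ≥ 0 (both chosen just for illustration): only points that pass every constraint are compared on the objective.

```python
def f(x, y):
    """Hypothetical objective to maximize."""
    return x * y

def feasible(x, y):
    """Hypothetical constraints: x + y <= 4 with x, y >= 0 (a triangle)."""
    return x + y <= 4 and x >= 0 and y >= 0

# Candidate points; only those inside the feasible region count.
candidates = [(0, 0), (1, 3), (2, 2), (3, 3), (5, -1)]
feasible_pts = [p for p in candidates if feasible(*p)]

# Compare the objective only on feasible points.
best = max(feasible_pts, key=lambda p: f(*p))
print(best, f(*best))  # (2, 2) 4
```

Note that (3, 3) would give a larger objective value, but it violates x + y ≤ 4, so it never enters the comparison.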

Lagrange Multipliers


Introduction to Lagrange Multipliers

The core idea behind Lagrange multipliers is geometric: at a constrained extremum, the gradient of the objective function must be parallel to the gradient of the constraint. If they weren't parallel, you could move along the constraint surface in a direction that still improves the objective function, meaning you haven't found the extremum yet.

The Lagrange multiplier \lambda is the scalar that relates these two gradients:

\nabla f = \lambda \nabla g

This condition, combined with the constraint itself, gives you a system of equations to solve.

Solving Constrained Optimization Problems

To optimize f(x, y) subject to the constraint g(x, y) = 0:

  1. Form the Lagrangian function: L(x, y, \lambda) = f(x, y) - \lambda \, g(x, y)

  2. Take partial derivatives of L with respect to each variable and set them equal to zero:

    • \frac{\partial L}{\partial x} = 0
    • \frac{\partial L}{\partial y} = 0
    • \frac{\partial L}{\partial \lambda} = 0 (this just recovers the constraint g(x, y) = 0)
  3. Solve the resulting system of equations simultaneously for x, y, and \lambda.

  4. Evaluate f at each critical point to determine which gives the maximum and which gives the minimum.

  5. Check boundary behavior or use second-order conditions if you need to classify the critical points rigorously. The bordered Hessian matrix is the standard tool here: the sign of its determinant tells you whether a critical point is a constrained maximum, a constrained minimum, or neither.
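The steps above can be walked through on a classic example: maximize f(x, y) = xy subject to g(x, y) = x + y - 10 = 0 (a hypothetical problem chosen because the system solves by hand). The sketch below encodes the hand solution and verifies the stationarity conditions numerically.

```python
# Maximize f(x, y) = x*y subject to g(x, y) = x + y - 10 = 0.
# With L = f - lam*g, the stationarity conditions are:
#   dL/dx   = y - lam = 0
#   dL/dy   = x - lam = 0
#   dL/dlam = -(x + y - 10) = 0
# From the first two equations x = y = lam; substituting into the
# constraint gives 2*lam = 10, so lam = 5.

def solve_example():
    lam = 10 / 2          # x = y = lam and x + y = 10
    x, y = lam, lam
    return x, y, lam

x, y, lam = solve_example()
assert abs((x + y) - 10) < 1e-12        # the constraint holds
assert abs(y - lam) < 1e-12             # dL/dx = 0
assert abs(x - lam) < 1e-12             # dL/dy = 0
print(x, y, x * y)  # 5.0 5.0 25.0
```

The critical point (5, 5) gives f = 25; checking endpoints of the constraint line (e.g., (0, 10) or (10, 0), where f = 0) confirms it is the maximum.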

Note on sign convention: Some textbooks write L = f + \lambda g and others write L = f - \lambda g. Both work because \lambda can be positive or negative. Just be consistent with whichever your course uses.

For problems with multiple constraints g_1(x, y, z) = 0 and g_2(x, y, z) = 0, you introduce a separate multiplier for each:

L = f - \lambda_1 g_1 - \lambda_2 g_2

The same procedure applies, but the system of equations is larger.
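As a sanity check on a two-constraint answer, it sometimes helps that both constraints can be eliminated by substitution. In this hypothetical example (minimize f(x, y, z) = x² + y² + z² subject to x + y + z = 1 and x - y = 0), substituting y = x and z = 1 - 2x reduces the problem to one variable, which lets us verify the minimizer directly:

```python
# Minimize f(x, y, z) = x**2 + y**2 + z**2 subject to
#   g1: x + y + z = 1   and   g2: x - y = 0.
# Eliminating the constraints (y = x, z = 1 - 2x) gives a reduced
# one-variable objective along the feasible line.

def reduced(x):
    # f restricted to the feasible line: x**2 + x**2 + (1 - 2x)**2
    return 2 * x**2 + (1 - 2 * x) ** 2

# Minimize the reduced quadratic: d/dx = 12x - 4 = 0  =>  x = 1/3.
x = 1 / 3
point = (x, x, 1 - 2 * x)
print(point, reduced(x))  # (1/3, 1/3, 1/3) with value 1/3
```

The Lagrange route with two multipliers reaches the same point (1/3, 1/3, 1/3); elimination is just a convenient cross-check when the constraints are simple enough to solve for some variables.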

Interpreting \lambda

The multiplier \lambda has a concrete meaning: it measures the rate of change of the optimal value of f with respect to relaxing the constraint. If your constraint is a budget g = 0, then \lambda tells you approximately how much the optimum improves per unit increase in budget. This interpretation shows up constantly in economics and engineering.
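This sensitivity interpretation can be checked numerically on a small example. For the hypothetical problem "maximize f(x, y) = xy subject to x + y = c", the optimum is (c/2)², attained at x = y = c/2, and the multiplier works out to \lambda = c/2. Relaxing the constraint by a small h should therefore improve the optimum by roughly \lambda·h:

```python
# Maximize x*y subject to x + y = c: the optimal value is (c/2)**2
# and the multiplier at the optimum is lam = c/2.

def optimum(c):
    """Optimal value of x*y on the line x + y = c (attained at x = y = c/2)."""
    return (c / 2) ** 2

c, h = 10.0, 1e-6
lam = c / 2  # multiplier at the optimum

# Finite-difference rate of change of the optimum as the constraint relaxes.
numeric_rate = (optimum(c + h) - optimum(c)) / h
print(lam, numeric_rate)  # both approximately 5.0
```

The finite-difference rate matches \lambda to several decimal places, which is exactly the "value per unit of relaxed constraint" reading described above.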


Applications

Economic Applications

  • Profit maximization: Maximize profit \pi(x, y) subject to production constraints like limited labor or raw materials.
  • Cost minimization: Minimize a cost function subject to a required output level, e.g., produce at least q_0 units.
  • Utility maximization: A consumer maximizes U(x, y) subject to a budget constraint p_1 x + p_2 y = M. This is one of the most classic Lagrange multiplier setups, and here \lambda represents the marginal utility of income.
  • Resource allocation: Distribute limited resources across activities to maximize total output or minimize total cost.
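For a concrete instance of the utility-maximization setup above, a common textbook choice (an assumed functional form, not the only one) is Cobb-Douglas utility U(x, y) = x^a · y^(1-a). Working through the Lagrange conditions with the budget p_1 x + p_2 y = M gives the closed-form demands x* = aM/p_1 and y* = (1-a)M/p_2, which this sketch computes and checks against the budget:

```python
def cobb_douglas_demand(a, p1, p2, M):
    """Optimal bundle for U = x**a * y**(1 - a) under p1*x + p2*y = M.

    The Lagrange conditions give the closed form
    x* = a*M/p1 and y* = (1 - a)*M/p2.
    """
    return a * M / p1, (1 - a) * M / p2

# Hypothetical numbers: a = 0.4, prices 2 and 5, income 100.
x, y = cobb_douglas_demand(a=0.4, p1=2.0, p2=5.0, M=100.0)
print(x, y)  # 20.0 12.0 -- and 2*20 + 5*12 = 100 exhausts the budget
```

A useful feature of this form: the consumer spends the fixed fraction a of income on the first good regardless of prices, which is why Cobb-Douglas shows up so often in examples.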

Other Applications

  • Engineering design: Optimize parameters like weight or efficiency subject to physical constraints (strength requirements, material limits).
  • Portfolio optimization: Maximize expected return for a given level of risk, or minimize risk for a target return.
  • Environmental management: Minimize pollution subject to economic or regulatory constraints.
  • Transportation and logistics: Minimize shipping cost or delivery time subject to capacity and demand constraints.