Operations research (OR) is the backbone of industrial engineering. It's where math meets real-world problem-solving. When you're tested on OR techniques, you're being evaluated on your ability to select the right tool for the right problem. Can you recognize when a situation calls for optimization versus simulation? Do you understand why some problems need integer solutions while others work with continuous variables?
These distinctions matter because industrial engineers don't just crunch numbers. They translate messy business problems into solvable mathematical models.
The techniques in this guide fall into distinct categories: optimization methods, stochastic modeling, planning tools, and predictive analytics. Each category addresses a fundamentally different type of decision problem. Don't just memorize definitions. Know what makes each technique appropriate for specific scenarios and how they connect to broader concepts like resource allocation, uncertainty management, and system efficiency.
Optimization Methods
These techniques find the best possible solution given constraints. The core principle: translate real-world limitations into mathematical inequalities, then systematically search for the optimal point.
Linear Programming
Maximizes or minimizes a linear objective function subject to linear constraints. This is the foundation of all optimization in OR.
Decision variables are continuous, meaning solutions can take any value (e.g., 3.7 units). This makes computation more tractable because the feasible region forms a smooth, convex shape.
Graphical solutions work for two variables; the simplex algorithm handles larger problems by moving along vertices of the feasible region. The optimal solution always lies at a corner point (vertex) of the feasible region.
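The ideas above can be sketched with SciPy's linprog solver. This is a minimal illustration, not from the text: the product-mix numbers (maximize 3x + 5y subject to x ≤ 4, 2y ≤ 12, 3x + 2y ≤ 18) are hypothetical.

```python
# A small product-mix LP (hypothetical coefficients):
# maximize 3x + 5y  subject to  x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective to maximize
c = [-3, -5]
A_ub = [[1, 0],
        [0, 2],
        [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x)      # optimal (x, y) -- lies at a vertex of the feasible region
print(-res.fun)   # optimal objective value
```

Note that the solver reports a corner point of the feasible region, consistent with the vertex property described above.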
Integer Programming
Requires some or all decision variables to be whole numbers. This is essential when fractional solutions don't make sense (you can't hire 2.3 employees or open 0.6 of a warehouse).
Mixed-integer programming (MIP) allows some variables to be continuous and others to be integers. Pure integer programming restricts all variables to integers. Binary (0-1) integer programming limits variables to just 0 or 1, which is useful for yes/no decisions like facility location.
Computationally much harder than LP because the feasible region is no longer continuous. You can't simply solve the LP relaxation and round, since the rounded solution may be infeasible or far from optimal.
Branch-and-bound algorithms systematically explore possible integer solutions by dividing the problem into smaller subproblems, solving LP relaxations at each node, and pruning branches that can't improve on the best known solution.
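The gap between an LP relaxation and the true integer optimum can be seen in a small sketch, assuming SciPy ≥ 1.9 (whose HiGHS backend accepts an integrality flag and runs branch-and-bound internally). The capital-budgeting numbers are hypothetical.

```python
# A 0-1 "which projects to fund" IP (hypothetical numbers):
# maximize 8x1 + 11x2 + 6x3 + 4x4  subject to  5x1 + 7x2 + 4x3 + 3x4 <= 14,
# each x binary.
from scipy.optimize import linprog

c = [-8, -11, -6, -4]          # negate to maximize
A_ub = [[5, 7, 4, 3]]
b_ub = [14]

# LP relaxation: fractional x allowed in [0, 1]
relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 1)] * 4, method="highs")

# Binary IP: integrality=1 forces each variable to take an integer value
integer = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 4,
                  method="highs", integrality=[1] * 4)

print(relaxed.x, -relaxed.fun)   # fractional vertex (x3 = 0.5), value 22
print(integer.x, -integer.fun)   # true binary optimum, value 21
```

Rounding the relaxation down here gives an objective of 19, while the true integer optimum is 21 with a completely different selection, which is exactly why rounding is unreliable.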
Network Analysis
Models systems as nodes and arcs to find optimal paths, flows, or assignments. Think supply chains, transportation routes, and communication networks.
Three classic problem types you'll encounter:
Shortest path finds the minimum-cost route between two nodes (e.g., Dijkstra's algorithm)
Maximum flow determines the greatest throughput from source to sink given arc capacities
Minimum spanning tree connects all nodes at the lowest total cost without forming cycles
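The shortest-path case can be sketched with a standard heap-based Dijkstra implementation; the four-node network and its arc costs are hypothetical.

```python
import heapq

# Dijkstra's shortest-path algorithm on a small directed network.
# graph maps each node to a list of (neighbor, arc cost) pairs.
def dijkstra(graph, source):
    dist = {source: 0}
    pq = [(0, source)]                    # priority queue of (cost, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(graph, "A"))   # minimum cost from A to every reachable node
```

Here the cheapest A-to-D route goes A → C → B → D (cost 8), not the direct-looking A → B → D (cost 9).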
Transportation and assignment problems are special LP structures with their own efficient solution methods (like the Hungarian algorithm for assignment), so they can be solved faster than with general-purpose LP solvers.
Compare: Linear Programming vs. Integer Programming: both optimize linear objectives with constraints, but LP allows continuous solutions while IP requires discrete values. If an exam problem involves scheduling workers or selecting facilities, IP is your answer; if it's about production mix quantities, LP likely applies.
Stochastic and Probabilistic Models
When uncertainty drives the system, deterministic optimization won't cut it. These techniques incorporate randomness and probability to model real-world variability.
Queuing Theory
Analyzes waiting line behavior using arrival rates (λ) and service rates (μ) to predict congestion and delays.
Key metrics include average wait time (W), average queue length (L_q), and server utilization (ρ = λ/μ). These are critical for designing service systems that balance cost against customer satisfaction.
Kendall's notation describes queuing models in the format A/S/c (arrival process / service process / number of servers). The M/M/1 model has Poisson arrivals, exponential service times, and one server. The M/M/c model extends this to multiple servers. The "M" stands for Markovian (memoryless), which is what makes these models analytically solvable.
A key stability condition: the system only reaches steady state when ρ < 1, meaning the service rate must exceed the arrival rate.
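The standard closed-form M/M/1 results can be wrapped in a short helper; the rates (8 arrivals/hour against 10 services/hour) are hypothetical.

```python
# Steady-state metrics for an M/M/1 queue (standard textbook formulas).
def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("unstable: requires rho = lam/mu < 1")
    rho = lam / mu                  # server utilization
    L = rho / (1 - rho)             # average number in the system
    Lq = rho ** 2 / (1 - rho)       # average number waiting in queue
    W = 1 / (mu - lam)              # average time in the system
    Wq = rho / (mu - lam)           # average wait before service starts
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

m = mm1_metrics(lam=8, mu=10)
print(m)   # rho = 0.8, so 4 customers in the system on average
```

Even at 80% utilization the average queue already holds about 3.2 customers, a reminder of how sharply congestion grows as ρ approaches 1.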
Markov Chains
Models state transitions where the future depends only on the present state, not on how you got there. This "memoryless" property (the Markov property) dramatically simplifies analysis of complex dynamic systems.
Transition probability matrices capture the likelihood of moving between states in one time step. Each row sums to 1. By raising the matrix to higher powers, you can find the probability of being in any state after n steps.
Steady-state (long-run) probabilities describe where the system spends its time on average, regardless of starting state. These are found by solving πP = π, where π is the steady-state vector and P is the transition matrix.
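Both calculations can be sketched with NumPy; the two-state transition matrix is hypothetical.

```python
import numpy as np

# Two-state Markov chain (hypothetical transition probabilities).
# Each row of P sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# n-step transition probabilities: raise P to the nth power
print(np.linalg.matrix_power(P, 3))

# Steady state: solve pi P = pi with the components of pi summing to 1,
# via the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()                 # normalize to a probability vector
print(pi)   # long-run fraction of time spent in each state
```

For this matrix the chain settles at (5/6, 1/6) regardless of where it starts.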
Simulation
Mimics system behavior over time when analytical solutions are impossible or impractical. This is your go-to for complex, stochastic systems with interactions that defy clean mathematical formulation.
Monte Carlo methods use repeated random sampling to estimate outcomes (useful for static problems like estimating π or valuing financial options). Discrete-event simulation tracks state changes at specific event times and is better suited for dynamic systems like factory floors or hospital emergency departments.
Enables "what-if" analysis without disrupting real operations. You can test a new factory layout virtually before spending millions on implementation.
The tradeoff: simulation gives you statistical estimates, not guaranteed optimal solutions. You'll need enough replications to get reliable confidence intervals on your output metrics.
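The static Monte Carlo example mentioned above (estimating π) fits in a few lines; the sample size and seed are arbitrary choices.

```python
import random

# Monte Carlo estimate of pi: sample points uniformly in the unit square
# and count the fraction landing inside the quarter circle x^2 + y^2 <= 1.
def estimate_pi(n, seed=42):
    rng = random.Random(seed)          # fixed seed for reproducibility
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * inside / n

print(estimate_pi(100_000))   # a statistical estimate, not an exact answer
```

This illustrates the tradeoff stated above: the answer is an estimate whose precision improves only with more samples (error shrinks roughly as 1/√n).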
Compare: Queuing Theory vs. Simulation: queuing theory provides closed-form solutions for idealized systems (Poisson arrivals, exponential service), while simulation handles any distribution and complexity level. Use queuing theory for quick estimates of well-structured problems and simulation for detailed, realistic modeling where assumptions don't hold.
Planning and Scheduling Tools
These techniques structure complex projects and manage resources across time. The focus shifts from finding optimal values to coordinating activities and managing timelines.
Project Management (PERT/CPM)
CPM (Critical Path Method) identifies the critical path, the longest sequence of dependent tasks that determines minimum project duration. Any delay on a critical path activity delays the entire project.
PERT (Program Evaluation and Review Technique) incorporates uncertainty by using three time estimates per activity:
a = optimistic time
m = most likely time
b = pessimistic time
Expected time: t_e = (a + 4m + b) / 6
Variance: σ² = ((b - a) / 6)²
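The three-estimate formulas can be applied activity by activity and summed along a path; the activity estimates below are hypothetical.

```python
# PERT expected time and variance per activity, then the mean and variance
# of a path (hypothetical (a, m, b) estimates for three critical tasks).
def pert_stats(a, m, b):
    te = (a + 4 * m + b) / 6           # beta-distribution mean
    var = ((b - a) / 6) ** 2           # beta-distribution variance
    return te, var

activities = [(2, 4, 6), (3, 5, 13), (1, 2, 3)]
stats = [pert_stats(a, m, b) for a, m, b in activities]

path_mean = sum(te for te, _ in stats)
path_var = sum(v for _, v in stats)    # variances add for independent tasks
print(path_mean, path_var)
```

Summing variances along the critical path is what lets PERT attach a probability to meeting a deadline (via a normal approximation of the path total).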
Float (slack) analysis reveals which activities can be delayed without affecting the project deadline. Activities with zero float are on the critical path. This information is essential for resource leveling, where you shift non-critical tasks to smooth out resource demand.
Inventory Management
EOQ (Economic Order Quantity) balances ordering costs against holding costs to find the order size that minimizes total inventory cost:
EOQ = √(2DS / H)
where D is annual demand, S is the fixed cost per order, and H is the annual holding cost per unit. The model assumes constant demand and instantaneous replenishment.
Reorder point (ROP) models determine when to place orders based on lead time demand. With variable demand, you add safety stock: ROP = d̄·L + z·σ_d·√L, where d̄ is average daily demand, L is lead time, z is the service level z-score, and σ_d is the standard deviation of daily demand.
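Both formulas are direct to compute; the demand, cost, and service-level figures below are hypothetical.

```python
import math

# EOQ and reorder point with safety stock (hypothetical figures:
# D = 12,000 units/year, S = $50/order, H = $6/unit/year).
def eoq(D, S, H):
    return math.sqrt(2 * D * S / H)

def reorder_point(d_bar, lead_time, z, sigma_d):
    # average demand over lead time plus safety stock for variability
    return d_bar * lead_time + z * sigma_d * math.sqrt(lead_time)

q = eoq(D=12_000, S=50, H=6)
rop = reorder_point(d_bar=40, lead_time=9, z=1.65, sigma_d=5)
print(q, rop)
```

With these numbers the order size comes out near 447 units, and a z of 1.65 (roughly a 95% service level) adds about 25 units of safety stock on top of the 360 units of expected lead-time demand.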
JIT (Just-In-Time) systems minimize inventory by synchronizing supply with production. This reduces holding costs but increases vulnerability to supply chain disruptions, since there's little buffer stock to absorb delays.
Compare: PERT vs. CPM: both identify critical paths, but CPM assumes deterministic (known) activity times while PERT treats durations as random variables with a beta distribution. Use CPM for routine projects with predictable tasks; use PERT when time estimates are uncertain.
Decision Support and Prediction
These techniques help engineers make better choices under uncertainty and anticipate future conditions. The emphasis is on structuring decisions and extracting patterns from data.
Decision Analysis
Decision trees structure sequential choices by mapping alternatives, chance events, and outcomes into a visual framework. Square nodes represent decisions, circle nodes represent chance events, and branches show possible paths.
Expected value (EV) calculations weight outcomes by their probabilities:
EV = Σ p_i · v_i
where p_i is the probability of outcome i and v_i is the payoff. You solve decision trees by folding back from right to left: calculate expected values at chance nodes, then pick the best alternative at decision nodes.
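The fold-back procedure can be sketched for a one-stage tree; the plant-sizing alternatives and payoffs are hypothetical.

```python
# Folding back a one-stage decision tree (hypothetical payoffs): two
# alternatives, each leading to a chance node with two outcomes.
def expected_value(outcomes):
    # outcomes: list of (probability, payoff) pairs at a chance node
    return sum(p * v for p, v in outcomes)

alternatives = {
    "build large plant": [(0.6, 200_000), (0.4, -180_000)],
    "build small plant": [(0.6, 100_000), (0.4, -20_000)],
}

# Fold back right to left: EV at each chance node, then the best
# alternative at the decision node.
evs = {name: expected_value(o) for name, o in alternatives.items()}
best = max(evs, key=evs.get)
print(evs)
print(best)
```

The large plant has the bigger upside, but its EV (48,000) loses to the small plant (52,000) once the downside risk is weighted in.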
Sensitivity analysis tests how robust your decision is to changes in assumptions. If a small shift in one probability flips the best decision, that parameter deserves careful estimation.
Forecasting Techniques
Time series methods extrapolate patterns from historical data. Moving averages smooth out noise by averaging recent periods. Exponential smoothing gives more weight to recent observations using a smoothing constant α (between 0 and 1). These work best when past behavior is a reliable predictor of the future.
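Exponential smoothing reduces to one recurrence, sketched below with a hypothetical demand series; initializing the forecast at the first observation is a common convention, not the only one.

```python
# Simple exponential smoothing: each new forecast blends the latest
# observation with the previous forecast,
#   F[t+1] = alpha * y[t] + (1 - alpha) * F[t].
def exp_smooth(series, alpha):
    forecast = series[0]               # common initialization choice
    for y in series:
        forecast = alpha * y + (1 - alpha) * forecast
    return forecast                    # forecast for the next period

demand = [120, 130, 125, 140, 135]
print(exp_smooth(demand, alpha=0.3))
```

A larger α tracks recent changes faster; a smaller α smooths more aggressively but lags behind trend shifts.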
Regression models identify relationships between a dependent variable and one or more independent variables. Unlike time series methods, regression can incorporate causal factors (e.g., predicting sales based on advertising spend and price).
Forecast accuracy metrics quantify prediction error and guide method selection:
MAD (Mean Absolute Deviation): average of absolute errors
MSE (Mean Squared Error): average of squared errors, penalizes large errors more heavily
MAPE (Mean Absolute Percentage Error): expresses error as a percentage of actual values, making it easier to compare across different scales
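The three metrics above can be computed side by side on the same series; the actual and forecast values below are hypothetical.

```python
# Forecast accuracy metrics on a hypothetical actual-vs-forecast series.
def accuracy(actual, forecast):
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    mad = sum(abs(e) for e in errors) / n                       # MAD
    mse = sum(e ** 2 for e in errors) / n                       # MSE
    mape = 100 * sum(abs(e) / a                                 # MAPE (%)
                     for e, a in zip(errors, actual)) / n
    return mad, mse, mape

actual = [100, 110, 120, 130]
forecast = [102, 108, 123, 126]
mad, mse, mape = accuracy(actual, forecast)
print(mad, mse, mape)
```

Note how the one large error (4 units) dominates MSE far more than MAD, which is exactly the squared-error penalty described above.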
Compare: Decision Analysis vs. Forecasting: decision analysis helps you choose among alternatives given uncertain outcomes, while forecasting predicts what those outcomes might be. They're complementary: use forecasting to estimate probabilities, then plug those into decision trees.
Quick Reference Table
Concept | Best Examples
Continuous Optimization | Linear Programming, Network Analysis
Discrete Optimization | Integer Programming
Uncertainty Modeling | Queuing Theory, Markov Chains, Simulation
Project Planning | PERT, CPM
Inventory Control | EOQ, JIT, Reorder Point Models
Decision Making Under Uncertainty | Decision Trees, Expected Value Analysis
Prediction and Estimation | Time Series, Regression, Monte Carlo Simulation
Self-Check Questions
A company needs to determine how many warehouses to open and where to locate them. Would you use linear programming or integer programming? Why?
Compare queuing theory and simulation: under what conditions would you choose simulation over an analytical queuing model?
A project manager knows task durations precisely from past experience. Should they use PERT or CPM, and what's the key difference?
Which two techniques both use probability distributions but serve fundamentally different purposes: one for modeling system states over time and one for structuring choices?
A manufacturing company faces highly variable customer demand and complex production constraints. Which technique allows testing multiple scenarios without real-world disruption, and why is it preferred over deterministic optimization here?