Predictor-corrector methods combine explicit and implicit multistep formulas to solve ordinary differential equations. The explicit formula generates a rough estimate (the prediction), and the implicit formula refines it (the correction). This two-phase approach gives you better accuracy than a purely explicit method while avoiding the full cost of solving the nonlinear system that a purely implicit method would require.
This section covers the main predictor-corrector families, how the predict-correct cycle works, and the stability and convergence theory behind them.
## Predictor-corrector method overview
The core idea is straightforward: use an explicit multistep method to compute a prediction $y_{n+1}^{(p)}$, then feed that prediction into an implicit multistep method to correct it. Because the implicit formula already has a good initial guess from the predictor, you typically need only one or two iterations of the corrector rather than solving a full nonlinear system.
This makes predictor-corrector methods a natural bridge between explicit methods (cheap but less stable) and implicit methods (stable but expensive). They're a fundamental tool in Numerical Analysis II because they illustrate how you can combine methods strategically to get the best of both worlds.
## Types of predictor-corrector methods
### Adams-Bashforth-Moulton methods
The most widely used predictor-corrector family pairs an explicit Adams-Bashforth formula (predictor) with an implicit Adams-Moulton formula (corrector). Both formulas use previously computed solution values and derivative evaluations to estimate the next point.
- The predictor and corrector are typically chosen to be the same order, or the corrector is one order higher
- Only one new function evaluation is needed per step (at the predicted value), making these efficient
- Variable-order implementations exist that adapt the polynomial degree during integration for automatic error control
- Well-suited for initial value problems where the solution is reasonably smooth
### Milne-Simpson methods
This pairing uses Milne's method as the predictor and Simpson's rule as the corrector. Both are based on integrating an interpolating polynomial through known function values.
- Simpson's corrector achieves fourth-order accuracy, which is attractive for its relatively low cost
- The downside: Milne-Simpson is weakly unstable. For decaying solutions, a parasitic root of its characteristic equation exceeds one in magnitude no matter how small the step size, so spurious components grow over long integrations. This is a classic example of why stability analysis matters beyond just looking at order of accuracy.
- Requires more starting values than Adams methods of comparable order
### Hamming's method
Hamming modified the Milne-Simpson approach specifically to fix its stability weakness.
- Introduces a weighted combination of predicted and corrected values that damps the parasitic solution Milne-Simpson suffers from
- Sacrifices a small amount of accuracy compared to Milne-Simpson in exchange for significantly better stability
- Provides a built-in error estimate from the difference between predicted and corrected values
## Predictor step
### Explicit methods
The predictor uses an explicit formula, meaning the new value depends only on already-known quantities. The general $k$-step Adams-Bashforth predictor takes the form:

$$y_{n+1}^{(p)} = y_n + h \sum_{j=1}^{k} \beta_j\, f_{n+1-j}$$

Here $f_{n+1-j} = f(t_{n+1-j}, y_{n+1-j})$ are derivative evaluations at previous steps, and the coefficients $\beta_j$ are determined by requiring the formula to be exact for polynomials up to a certain degree. For example, the fourth-order Adams-Bashforth predictor ($k = 4$) has coefficients $\beta_1 = \frac{55}{24}$, $\beta_2 = -\frac{59}{24}$, $\beta_3 = \frac{37}{24}$, $\beta_4 = -\frac{9}{24}$:

$$y_{n+1}^{(p)} = y_n + \frac{h}{24}\left(55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3}\right)$$
The prediction is cheap to compute since no iteration is needed, but it's less accurate than the corrected value that follows.
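As a concrete sketch, one fourth-order Adams-Bashforth predictor step looks like the following; the test problem $y' = -y$ (exact solution $e^{-t}$) is an assumption for illustration:

```python
import math

def ab4_predict(y_n, h, f_hist):
    """One fourth-order Adams-Bashforth predictor step.

    f_hist = [f_n, f_{n-1}, f_{n-2}, f_{n-3}], newest first.
    """
    fn, fn1, fn2, fn3 = f_hist
    return y_n + h / 24.0 * (55*fn - 59*fn1 + 37*fn2 - 9*fn3)

# Assumed test problem: y' = -y, so f(t) = -e^{-t} along the exact solution.
h = 0.1
f_hist = [-math.exp(-t) for t in (0.3, 0.2, 0.1, 0.0)]
y_pred = ab4_predict(math.exp(-0.3), h, f_hist)   # predicts y(0.4)
```

Starting from exact history values, the prediction lands within the $O(h^5)$ local error of the true value $e^{-0.4}$.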
### Extrapolation techniques
Beyond standard Adams-Bashforth formulas, you can predict by extrapolating from known solution values using polynomial interpolation (Lagrange or Newton forms). Richardson extrapolation can further improve accuracy by combining predictions at different step sizes to cancel leading error terms. These techniques are more specialized and computationally expensive, but useful when higher accuracy is needed from the predictor.
## Corrector step
### Implicit methods
The corrector uses an implicit formula in which the unknown $y_{n+1}$ appears on both sides. The general $k$-step Adams-Moulton corrector is:

$$y_{n+1} = y_n + h \sum_{j=0}^{k-1} \beta_j^*\, f_{n+1-j}$$

The $j = 0$ term means the sum includes $f_{n+1} = f(t_{n+1}, y_{n+1})$, which is what makes this implicit. The key insight: instead of solving this nonlinear equation from scratch, you substitute the predicted value $y_{n+1}^{(p)}$ into $f_{n+1}$, giving a corrected value directly. The fourth-order Adams-Moulton corrector, for example, is:

$$y_{n+1} = y_n + \frac{h}{24}\left(9 f_{n+1} + 19 f_n - 5 f_{n-1} + f_{n-2}\right)$$
### Iteration process
The standard approach is called PECE (Predict-Evaluate-Correct-Evaluate):
- P: Compute the prediction $y_{n+1}^{(p)}$ using the predictor formula
- E: Evaluate $f_{n+1}^{(p)} = f(t_{n+1}, y_{n+1}^{(p)})$
- C: Compute $y_{n+1}$ using the corrector formula with $f_{n+1}^{(p)}$ in place of $f_{n+1}$
- E: Evaluate $f_{n+1} = f(t_{n+1}, y_{n+1})$ for use in the next step
You can iterate the corrector (PECECE...) for additional refinement, using fixed-point iteration. Convergence is guaranteed when $h$ is sufficiently small; specifically, $h L \lvert \beta_0^* \rvert < 1$ suffices, where $L$ is a Lipschitz constant for $f$ and $\beta_0^*$ is the corrector's coefficient on $f_{n+1}$. In practice, one correction is often enough because the predictor already provides a good starting guess, and iterating further doesn't improve the method's order of accuracy.
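The PECE cycle can be sketched end to end for the fourth-order Adams-Bashforth-Moulton pair. The Runge-Kutta startup and the test problem $y' = -y$ are illustrative assumptions, not part of the method itself:

```python
import math

def rk4_step(f, t, y, h):
    # Classical fourth-order Runge-Kutta step, used here only for startup.
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def abm4_pece(f, t0, y0, h, n_steps):
    """Fourth-order Adams-Bashforth-Moulton in PECE mode (scalar sketch)."""
    ts, ys = [t0], [y0]
    for _ in range(3):                       # startup: three RK4 steps
        ys.append(rk4_step(f, ts[-1], ys[-1], h))
        ts.append(ts[-1] + h)
    fs = [f(t, y) for t, y in zip(ts, ys)]
    for _ in range(3, n_steps):
        t, y = ts[-1], ys[-1]
        # P: Adams-Bashforth predictor
        yp = y + h/24 * (55*fs[-1] - 59*fs[-2] + 37*fs[-3] - 9*fs[-4])
        fp = f(t + h, yp)                    # E: derivative at predicted value
        # C: Adams-Moulton corrector using the predicted derivative
        yc = y + h/24 * (9*fp + 19*fs[-1] - 5*fs[-2] + fs[-3])
        fs.append(f(t + h, yc))              # E: derivative for the next step
        ts.append(t + h)
        ys.append(yc)
    return ts, ys

# Assumed test problem: y' = -y, y(0) = 1, exact solution e^{-t}.
ts, ys = abm4_pece(lambda t, y: -y, 0.0, 1.0, 0.05, 20)
```

Note that each step after startup costs exactly two evaluations of $f$, matching the efficiency claim discussed later in this section.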
## Order of accuracy
### Local truncation error
The local truncation error (LTE) is the error introduced in a single step, assuming all previous values are exact. For a $p$-th order method, the LTE is:

$$\tau_{n+1} = C_{p+1}\, h^{p+1}\, y^{(p+1)}(t_n) + O(h^{p+2})$$

You can derive this by expanding the true solution in a Taylor series and comparing it to the numerical formula. The LTEs of the predictor and corrector are typically different. Their difference provides a convenient, cheap error estimate:

$$y(t_{n+1}) - y_{n+1} \approx K\left(y_{n+1} - y_{n+1}^{(p)}\right)$$

where $K$ is a known constant depending on the specific predictor-corrector pair (for the fourth-order Adams-Bashforth-Moulton pair, $K = -\frac{19}{270}$).
### Global truncation error
The global truncation error accumulates local errors over the entire integration interval $[a, b]$. For a $p$-th order method:

$$\max_n \lvert y(t_n) - y_n \rvert = O(h^p)$$

This is one order lower than the local error because errors accumulate over roughly $(b - a)/h$ steps. The global error is what ultimately determines how accurate your final answer is.
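A quick way to observe the global order in practice is to halve the step size and check that the error shrinks by about $2^p$. The second-order pair below (AB2 predictor, trapezoidal corrector) and the test problem $y' = -y$ are illustrative assumptions:

```python
import math

def pece2(f, t0, y0, h, n):
    """AB2 predictor + trapezoidal corrector in PECE mode (scalar sketch)."""
    ys = [y0]
    # one starting value from a classical RK4 step
    k1 = f(t0, y0)
    k2 = f(t0 + h/2, y0 + h/2*k1)
    k3 = f(t0 + h/2, y0 + h/2*k2)
    k4 = f(t0 + h, y0 + h*k3)
    ys.append(y0 + h/6*(k1 + 2*k2 + 2*k3 + k4))
    fs = [f(t0, ys[0]), f(t0 + h, ys[1])]
    for i in range(1, n):
        tn = t0 + (i + 1)*h
        yp = ys[-1] + h/2*(3*fs[-1] - fs[-2])      # predict (AB2)
        yc = ys[-1] + h/2*(f(tn, yp) + fs[-1])     # correct (trapezoid)
        fs.append(f(tn, yc))
        ys.append(yc)
    return ys[-1]

# Assumed test problem: y' = -y on [0, 1], exact value e^{-1} at t = 1.
f = lambda t, y: -y
err = lambda h: abs(pece2(f, 0.0, 1.0, h, round(1/h)) - math.exp(-1))
ratio = err(0.02) / err(0.01)   # a 2nd-order method should give a ratio near 4
```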
## Stability analysis
### Absolute stability
Stability analysis determines whether small perturbations grow or decay as integration proceeds. You test this using the model equation $y' = \lambda y$, where $\lambda$ is a complex constant.
Substituting into the predictor-corrector pair gives a recurrence relation, and the method is absolutely stable for a given $h\lambda$ if all solutions of that recurrence remain bounded. The set of all stable $h\lambda$ values forms the stability region in the complex plane.
- Adams-Bashforth-Moulton methods have finite stability regions (they are not A-stable)
- Higher-order Adams methods have smaller stability regions, which is a practical limitation
- Milne-Simpson's region of absolute stability contains no interval of the negative real axis, so parasitic growth cannot be avoided by shrinking the step size; this is why Hamming's modification exists
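The parasitic growth can be seen directly from the characteristic roots of Simpson's corrector applied to $y' = \lambda y$. The specific value $z = h\lambda = -0.1$ below is an illustrative choice:

```python
import cmath

def simpson_char_roots(z):
    # Simpson's rule applied to y' = lam*y with z = h*lam gives the recurrence
    #   (1 - z/3) y_{n+1} - (4z/3) y_n - (1 + z/3) y_{n-1} = 0;
    # substituting y_n = r^n yields a quadratic in r.
    a, b, c = 1 - z/3, -4*z/3, -(1 + z/3)
    d = cmath.sqrt(b*b - 4*a*c)
    return (-b + d) / (2*a), (-b - d) / (2*a)

z = -0.1                                   # decaying true solution, small step
roots = simpson_char_roots(z)
principal = min(roots, key=lambda r: abs(r - cmath.exp(z)))   # tracks e^z
parasitic = max(roots, key=lambda r: abs(r - cmath.exp(z)))   # spurious mode
```

The principal root closely matches $e^{z}$, but the parasitic root has magnitude greater than one even at this small step size, so the spurious mode grows.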
### Relative stability
Relative stability goes beyond asking "do errors stay bounded?" to asking "does the numerical solution preserve the qualitative behavior of the true solution?" For instance, if the true solution oscillates, does the numerical solution oscillate at roughly the right frequency? If the true solution decays, does the numerical solution decay at a comparable rate?
This is analyzed by comparing the roots of the method's characteristic equation: the extraneous (parasitic) roots should remain smaller in magnitude than the principal root that tracks the true solution. Relative stability is particularly important for oscillatory or multi-scale problems where you need the numerical method to track the solution's character, not just its magnitude.
## Implementation considerations
### Starting values
A $k$-step multistep method needs $k$ previous values to begin, but an initial value problem only provides $y(t_0) = y_0$. You need to generate $y_1, y_2, \dots, y_{k-1}$ using another method. The standard approach:
- Use a single-step method (typically a Runge-Kutta method of matching order) to compute the first $k-1$ values
- Ensure the starting method has at least the same order of accuracy as the predictor-corrector pair
- Once enough values are available, switch to the multistep predictor-corrector
If your starting values are less accurate than the multistep method, that lower accuracy can contaminate the entire solution.

### Step size selection
Choosing the step size involves balancing accuracy against computational cost.
- Initial step size: Often estimated from the problem's timescale or by taking a trial step and checking the error
- Adaptive control: The error estimate from the predictor-corrector difference $\lvert y_{n+1} - y_{n+1}^{(p)} \rvert$ drives automatic step size adjustment. If the estimated error exceeds a tolerance, the step is rejected and retried with a smaller $h$. If the error is well below tolerance, $h$ can be increased.
- Changing step size in multistep methods is nontrivial because the formulas assume equally spaced points. You either need to re-derive coefficients for the new spacing or use the Nordsieck representation (discussed below).
## Advantages vs disadvantages
### Comparison with Runge-Kutta methods
| Feature | Predictor-Corrector | Runge-Kutta |
|---|---|---|
| Function evaluations per step | ~2 (PECE mode) | 4+ for fourth order |
| Self-starting | No (needs startup procedure) | Yes |
| Error estimation | Built-in (from P-C difference) | Requires embedded pair |
| Step size changes | Complicated | Easy |
| Discontinuity handling | Poor (relies on past values) | Better (single-step) |

Predictor-corrector methods shine when you need many steps across a smooth solution, because the cost per step is lower. Runge-Kutta methods are more flexible and simpler to implement, especially when the step size needs to change frequently or the solution has discontinuities.
### Efficiency considerations
The main efficiency advantage of predictor-corrector methods is fewer function evaluations per step. Each PECE cycle requires only 2 evaluations of , compared to 4 for a classical fourth-order Runge-Kutta step. For problems where evaluating is expensive (large systems, complex physics), this savings adds up.
The tradeoff is higher memory usage (you must store previous solution values and derivative evaluations) and the added complexity of startup and step size changes.
## Applications in ODEs
### Initial value problems
Predictor-corrector methods are most commonly applied to initial value problems of the form $y' = f(t, y)$, $y(t_0) = y_0$. They're particularly effective for problems with smooth, non-stiff solutions over long integration intervals, such as:
- Orbital mechanics (planetary motion, satellite trajectories)
- Chemical kinetics with moderate timescale separation
- Population dynamics and compartmental models in epidemiology
### Boundary value problems
Predictor-corrector methods can be adapted for boundary value problems through shooting methods: you convert the BVP into a sequence of IVPs by guessing unknown initial conditions, solving forward with a predictor-corrector integrator, and iterating on the guess until the boundary conditions are satisfied. This approach is common in heat transfer, fluid dynamics, and structural analysis problems.
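A minimal shooting sketch, using bisection on the unknown initial slope and a plain RK4 integrator for brevity (a predictor-corrector integrator would slot in the same way). The BVP $y'' = -y$, $y(0) = 0$, $y(\pi/2) = 1$ is an assumed example whose exact answer is $y'(0) = 1$:

```python
import math

def rk4_solve(f, t0, y0, t1, n):
    # Integrate the system y' = f(t, y) (y a list) with n classical RK4 steps.
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Assumed BVP: y'' = -y, y(0) = 0, y(pi/2) = 1, as the system y1' = y2, y2' = -y1.
def system(t, y):
    return [y[1], -y[0]]

def residual(slope):
    # Boundary mismatch at t = pi/2 for a guessed initial slope y'(0) = slope.
    return rk4_solve(system, 0.0, [0.0, slope], math.pi / 2, 100)[0] - 1.0

lo, hi = 0.0, 2.0                  # bracket: residual(0) < 0 < residual(2)
for _ in range(60):                # bisection on the unknown slope
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
slope = 0.5 * (lo + hi)            # exact answer is y'(0) = 1
```

In practice a Newton or secant iteration on the residual converges faster than bisection; bisection is used here only because it needs no derivative of the residual.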
## Error estimation techniques
### Richardson extrapolation
Richardson extrapolation estimates the error by solving the same problem with two different step sizes ($h$ and $h/2$) and combining the results. If the method has order $p$, the leading error term cancels:

$$y_{\text{ext}} = y_{h/2} + \frac{y_{h/2} - y_h}{2^p - 1}$$
This gives both a more accurate solution and a reliable error estimate, at the cost of roughly doubling the computational work.
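A minimal sketch with explicit Euler ($p = 1$) on the assumed test problem $y' = -y$; the same combination rule applies to any method of known order:

```python
import math

def euler_solve(f, t0, y0, t1, n):
    # Explicit Euler: first order, so p = 1 in the extrapolation below.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -y                 # assumed test problem, exact e^{-1} at t = 1
p = 1
y_h  = euler_solve(f, 0.0, 1.0, 1.0, 100)     # step h
y_h2 = euler_solve(f, 0.0, 1.0, 1.0, 200)     # step h/2
y_ext = y_h2 + (y_h2 - y_h) / (2**p - 1)      # leading error term cancels
err_plain = abs(y_h2 - math.exp(-1))
err_ext   = abs(y_ext - math.exp(-1))
```

The extrapolated value is markedly more accurate than either raw run, and the difference $y_{h/2} - y_h$ doubles as the error estimate.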
### Embedded formulas
Predictor-corrector methods have a natural embedded error estimate: the difference between the corrected and predicted values. Since the corrector is typically one order higher than the predictor, this difference approximates the local truncation error of the predictor (up to a known constant). This estimate comes essentially for free and is what drives adaptive step size control.
## Adaptive step size control
### Error per step vs error per unit step
Two common strategies for controlling the local error estimate $\text{est}_n$ at step $n$:
- Error per step (EPS): require $\lVert \text{est}_n \rVert \le \varepsilon$. This bounds the error at each step but can over-constrain the problem when step sizes vary widely.
- Error per unit step (EPUS): require $\lVert \text{est}_n \rVert \le \varepsilon\, h_n$. This normalizes by step size, giving more uniform accuracy per unit of the independent variable. EPUS is generally preferred for problems with varying timescales.
### Step size adjustment algorithms
Once you have an error estimate $\text{est}_n$ at step $n$, the new step size is typically chosen as:

$$h_{\text{new}} = \sigma\, h_n \left(\frac{\varepsilon}{\lVert \text{est}_n \rVert}\right)^{1/(p+1)}$$

A safety factor $\sigma$ (e.g., 0.8 or 0.9) is applied to avoid repeatedly overshooting the tolerance. More advanced controllers like PI (Proportional-Integral) control also use the error from the previous step to smooth out step size oscillations:

$$h_{\text{new}} = h_n \left(\frac{\varepsilon}{\lVert \text{est}_n \rVert}\right)^{k_I} \left(\frac{\lVert \text{est}_{n-1} \rVert}{\lVert \text{est}_n \rVert}\right)^{k_P}$$

where $k_I$ and $k_P$ are tuning parameters. This prevents the "step size hunting" that simple controllers can exhibit.
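Both controllers can be sketched as small functions; the gains `kI` and `kP` below are illustrative defaults, not prescribed values:

```python
def basic_controller(h, err, tol, p, safety=0.9):
    """Elementary controller: h_new = safety * h * (tol/err)^(1/(p+1))."""
    return safety * h * (tol / err) ** (1.0 / (p + 1))

def pi_controller(h, err, err_prev, tol, p, kI=0.3, kP=0.4):
    """PI controller sketch; kI, kP are assumed tuning gains, adjust per problem."""
    return h * (tol / err) ** (kI / (p + 1)) * (err_prev / err) ** (kP / (p + 1))
```

With an error above tolerance the basic controller shrinks the step; below tolerance it grows the step; the PI variant leaves the step unchanged when the error sits exactly at tolerance for two consecutive steps.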
## Multistep methods connection
### Relationship to linear multistep methods
Predictor-corrector methods are specific implementations of the general linear multistep method (LMM):

$$\sum_{j=0}^{k} \alpha_j\, y_{n+j} = h \sum_{j=0}^{k} \beta_j\, f_{n+j}$$

The predictor is an LMM with $\beta_k = 0$ (explicit), and the corrector is an LMM with $\beta_k \neq 0$ (implicit). Adams methods are the special case where $\alpha_k = 1$, $\alpha_{k-1} = -1$, and all other $\alpha_j = 0$, meaning only $y_{n+k}$ and $y_{n+k-1}$ appear on the left side.
### Nordsieck form
The Nordsieck vector stores the solution and its scaled derivatives at the current point:

$$z_n = \left(y_n,\ h y_n',\ \frac{h^2}{2!} y_n'',\ \dots,\ \frac{h^p}{p!} y_n^{(p)}\right)^T$$
Advancing one step becomes a matrix-vector multiply, and changing the step size only requires rescaling the vector entries. This makes variable step size and variable order implementations much cleaner than reformulating the Adams coefficients directly. The Nordsieck form is the basis for many production ODE solvers.
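The rescaling property can be checked directly; the example values ($y = e^t$ at $t = 0$, where every derivative equals 1, and $h = 0.1$) are assumptions for illustration:

```python
import math

def nordsieck_vector(derivs, h):
    # z = [y, h*y', (h^2/2!)*y'', ..., (h^p/p!)*y^(p)]
    return [h**i * d / math.factorial(i) for i, d in enumerate(derivs)]

def rescale(z, ratio):
    # Changing the step from h to ratio*h multiplies entry i by ratio**i.
    return [ratio**i * zi for i, zi in enumerate(z)]

# Assumed example: y = e^t at t = 0, so every derivative equals 1.
z_old = nordsieck_vector([1.0] * 5, 0.1)
z_new = rescale(z_old, 0.5)                  # halve the step size
z_ref = nordsieck_vector([1.0] * 5, 0.05)    # same vector built from scratch
```

No Adams coefficients need to be re-derived: the step change is a componentwise scaling, which is exactly why production solvers favor this representation.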
## Convergence analysis
### Consistency conditions
A method is consistent if it reproduces the ODE exactly in the limit $h \to 0$. For a linear multistep method with coefficients $\alpha_j$ and $\beta_j$, consistency requires two conditions:
- $\sum_{j=0}^{k} \alpha_j = 0$ (the method is exact for constants)
- $\sum_{j=0}^{k} j\,\alpha_j = \sum_{j=0}^{k} \beta_j$ (the method is exact for $y = t$)
Higher-order consistency (exactness for $t^2, t^3, \dots$) gives additional equations that determine the coefficients for higher-order methods.
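The two conditions can be verified numerically for a concrete method, here the AB2 predictor written in general LMM form:

```python
# AB2 written as a general linear multistep method:
#   y_{n+2} - y_{n+1} = h * ((3/2) f_{n+1} - (1/2) f_n)
alpha = [0.0, -1.0, 1.0]    # coefficients of y_n, y_{n+1}, y_{n+2}
beta  = [-0.5, 1.5, 0.0]    # coefficients of f_n, f_{n+1}, f_{n+2}

cond1 = sum(alpha)                                          # exact for constants
cond2 = sum(j * a for j, a in enumerate(alpha)) - sum(beta)  # exact for y = t
```

Both quantities vanish, confirming AB2 is consistent; the same check applied to a miscopied coefficient table fails immediately, which makes it a useful sanity test when implementing a new method.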
### Zero-stability requirements
Zero-stability ensures the method doesn't amplify errors even for the trivial equation $y' = 0$. It's analyzed through the first characteristic polynomial $\rho(r) = \sum_{j=0}^{k} \alpha_j r^j$:
- All roots of $\rho(r) = 0$ must satisfy $\lvert r \rvert \le 1$
- Any root with $\lvert r \rvert = 1$ must be simple (not repeated)
This is the root condition. The Dahlquist equivalence theorem ties everything together: a consistent, zero-stable linear multistep method is convergent. Consistency alone is not enough, and zero-stability alone is not enough. You need both.
Milne-Simpson satisfies the root condition (it is zero-stable), but its stability region has a problematic structure that allows parasitic solutions to grow for certain $h\lambda$ values. This is a different issue from zero-stability and illustrates why absolute stability analysis is also necessary in practice.
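The root condition itself is easy to check numerically. Below, the first example is the Milne-Simpson characteristic polynomial $\rho(r) = r^2 - 1$; the second, $\rho(r) = r^2 - 2r + 1$ with a repeated root on the unit circle, is an assumed illustration of a failure, not a named method:

```python
import cmath

def quadratic_roots(a, b, c):
    d = cmath.sqrt(b*b - 4*a*c)
    return [(-b + d) / (2*a), (-b - d) / (2*a)]

def satisfies_root_condition(roots, tol=1e-12):
    # All |r| <= 1, and any root with |r| = 1 must be simple.
    for i, r in enumerate(roots):
        if abs(r) > 1 + tol:
            return False
        if abs(abs(r) - 1) <= tol:
            for j, s in enumerate(roots):
                if j != i and abs(r - s) <= tol:
                    return False      # repeated root on the unit circle
    return True

# Milne-Simpson: rho(r) = r^2 - 1, roots +1 and -1, both simple.
milne_ok = satisfies_root_condition(quadratic_roots(1, 0, -1))
# Assumed counterexample: rho(r) = r^2 - 2r + 1, double root at r = 1.
bad_ok = satisfies_root_condition(quadratic_roots(1, -2, 1))
```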