Numerical integration is what you turn to whenever a closed-form antiderivative doesn't exist, which is most of the time in real applications. In Numerical Analysis II, you're expected to go beyond applying formulas: you need to understand error behavior, convergence rates, and when to choose one method over another. These concepts tie directly into polynomial interpolation, Taylor series error bounds, and approximation theory more broadly.
What separates strong exam performance from mediocre recall is understanding the why behind each method. Why does Simpson's Rule outperform the Trapezoidal Rule? Why might you abandon classical methods entirely for Monte Carlo in high dimensions? For each method, know what order of accuracy it achieves, what assumptions it requires, and what trade-offs it makes between function evaluations and precision.
These foundational methods approximate the integrand using polynomials of increasing degree over subintervals. Higher-degree polynomial interpolation generally yields faster error convergence, but there are important caveats once the degree gets large enough.
The Midpoint Rule approximates the integrand as a constant (degree-0) polynomial on each subinterval, using the function value at the midpoint as the rectangle's height.
The Trapezoidal Rule uses linear (degree-1) interpolation, connecting function values at subinterval endpoints to form trapezoids.
Simpson's Rule fits a quadratic (degree-2) polynomial through three equally spaced points (both endpoints and the midpoint of each pair of subintervals). This requires an even number of subintervals.
Compare: Trapezoidal Rule vs. Simpson's Rule: both use endpoint values, but Simpson's adds the midpoint and weights the three values (1, 4, 1 pattern per panel) to fit parabolas instead of lines. If you're asked to justify choosing Simpson's, cite the $O(h^2)$ vs. $O(h^4)$ error improvement.
The Newton-Cotes family is the general framework using equally spaced nodes. The Rectangle, Trapezoidal, and Simpson's Rules are all special cases (closed formulas using degree-0, 1, and 2 interpolation, respectively).
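The three basic rules above can be sketched in a few lines of Python. This is a minimal illustration, not library code; the function names and the $\sin$ test integrand are choices made here for demonstration.

```python
import math

def midpoint(f, a, b, n):
    # Degree-0 interpolation: one rectangle per subinterval,
    # with height taken at the subinterval's midpoint.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    # Degree-1 interpolation: trapezoids joining endpoint values.
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    # Degree-2 interpolation with the 1-4-1 weight pattern per panel;
    # requires an even number of subintervals.
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes: weight 4
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes: weight 2
    return s * h / 3

# Integrate sin(x) over [0, pi]; the exact value is 2.
for rule in (midpoint, trapezoid, simpson):
    print(rule.__name__, abs(rule(math.sin, 0.0, math.pi, 16) - 2.0))
```

Running this with $n = 16$ makes the order difference concrete: Simpson's error is several orders of magnitude smaller than the other two at the same number of subintervals.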
These methods build on basic rules by systematically combining approximations to cancel error terms. The core principle: if you know the structure of the error expansion, you can eliminate leading terms through careful linear combinations.
Romberg Integration applies Richardson extrapolation to the composite Trapezoidal Rule. It exploits the fact that the Trapezoidal Rule's error has an expansion in even powers of $h$:

$$I - T(h) = c_1 h^2 + c_2 h^4 + c_3 h^6 + \cdots$$

By computing $T(h)$ for successively halved step sizes ($h, h/2, h/4, \ldots$), you build a triangular table where each new column eliminates another power of $h^2$ from the error.
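The triangular table can be built in a short routine. This is a sketch under the assumptions stated above (smooth integrand, even-power error expansion); the function name and the $e^x$ test case are illustrative.

```python
import math

def romberg(f, a, b, levels):
    # R[i][0]: composite trapezoid with 2**i subintervals (h halved each row).
    # R[i][j]: Richardson extrapolation removing the h**(2j) error term.
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2
        # Reuse the previous row: only the new midpoints need evaluating.
        new = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

# Integrate e**x over [0, 1]; the exact value is e - 1.
print(romberg(math.exp, 0.0, 1.0, 5))
```

Note the $1/(4^j - 1)$ factor: it is exactly the combination that cancels the $h^{2j}$ term when $h$ is halved, which is the Richardson extrapolation step.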
Composite methods subdivide the full interval and apply a basic rule to each subinterval (or pair of subintervals for Simpson's).
Compare: Romberg Integration vs. Composite Simpson's: both achieve high accuracy, but Romberg automatically improves order through extrapolation, while Composite Simpson's requires you to choose the number of subintervals upfront. Romberg is more "set and forget" for well-behaved (sufficiently smooth) functions, but it can struggle if the integrand lacks the smooth error expansion it assumes.
Rather than using equally spaced points, these methods choose node locations strategically to maximize accuracy per function evaluation. Optimal node placement can achieve exponential convergence for analytic functions, far outperforming fixed-spacing methods.
Gaussian quadrature selects nodes as roots of orthogonal polynomials (Legendre polynomials for standard intervals, Chebyshev, Laguerre, or Hermite for other weight functions) along with corresponding optimal weights.
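For the standard interval, NumPy's `numpy.polynomial.legendre.leggauss` supplies the Legendre nodes and weights on $[-1, 1]$; a short wrapper (the name `gauss_legendre` is ours) maps them to an arbitrary $[a, b]$:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    # Nodes (roots of the degree-n Legendre polynomial) and weights on [-1, 1].
    x, w = np.polynomial.legendre.leggauss(n)
    # Affine map from [-1, 1] to [a, b]; the Jacobian is (b - a) / 2.
    t = 0.5 * (b - a) * x + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(w * f(t))

# An n-point Gaussian rule is exact for polynomials up to degree 2n - 1,
# so 3 nodes integrate x**5 on [0, 1] exactly (the value is 1/6).
print(gauss_legendre(lambda x: x**5, 0.0, 1.0, 3))
```

The degree-$(2n-1)$ exactness is the payoff for choosing both nodes and weights optimally: a 3-point Newton-Cotes rule (Simpson's) is only exact through degree 3.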
Adaptive quadrature dynamically refines subintervals based on local error estimates, concentrating computational effort where the integrand is hardest to approximate.
The typical procedure:

1. Estimate the integral on an interval with a coarse rule and a finer one (e.g., one Simpson panel vs. two half-width panels).
2. Use the difference between the two estimates as a local error estimate.
3. If the estimate exceeds the interval's share of the error tolerance, split the interval in half and recurse on each half.
4. Otherwise, accept the finer estimate and move on.
This approach is essential for functions with localized complexity (sharp peaks, rapid oscillations in a small region, near-singularities) where uniform spacing would waste evaluations on smooth regions.
Compare: Gaussian Quadrature vs. Adaptive Quadrature: Gaussian optimizes node placement globally, assuming smooth behavior throughout. Adaptive adjusts locally to handle varying complexity. For integrands with sharp peaks or near-discontinuities, Adaptive wins. For smooth analytic functions on a fixed interval, Gaussian's efficiency is hard to beat.
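The recursive adaptive strategy can be sketched with Simpson panels. This is a minimal illustration (the function names, the factor-of-15 error heuristic, and the peaked test integrand are choices made here, not a standard API):

```python
import math

def simpson_panel(f, a, b):
    # One Simpson panel over [a, b] using its endpoints and midpoint.
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def adaptive_simpson(f, a, b, tol):
    # Compare one coarse panel against two half-width panels; their
    # difference estimates the local error. Recurse only where it is too big.
    m = 0.5 * (a + b)
    coarse = simpson_panel(f, a, b)
    fine = simpson_panel(f, a, m) + simpson_panel(f, m, b)
    if abs(fine - coarse) < 15.0 * tol:       # 15 = 2**4 - 1 from the h**4 error
        return fine + (fine - coarse) / 15.0  # one extrapolation step for free
    return (adaptive_simpson(f, a, m, tol / 2)
            + adaptive_simpson(f, m, b, tol / 2))

# A sharp peak at x = 0.5 with smooth behavior elsewhere: the recursion
# concentrates panels near the peak instead of spacing them uniformly.
peak = lambda x: 1.0 / (1e-4 + (x - 0.5) ** 2)
print(adaptive_simpson(peak, 0.0, 1.0, 1e-8))
```

Counting calls to `f` here versus in a uniform composite rule at the same accuracy is a good exercise: the adaptive version spends almost its entire budget inside the peak.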
When deterministic methods become impractical, especially in high dimensions, randomized approaches offer a fundamentally different strategy. Monte Carlo convergence depends on sample count, not dimension, which breaks the curse of dimensionality.
Monte Carlo integration estimates integrals via random sampling:

$$\int_D f(\mathbf{x})\, d\mathbf{x} \approx \frac{V}{N} \sum_{i=1}^{N} f(\mathbf{x}_i)$$

where the $\mathbf{x}_i$ are points sampled uniformly at random from the domain $D$ and $V$ is the volume of that domain.
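The estimator is a few lines of code even in 15 dimensions, which is precisely the point. A sketch over the unit hypercube (so $V = 1$; the function name and test integrand are illustrative):

```python
import random

def monte_carlo(f, dim, n, seed=0):
    # Estimate the integral of f over the unit hypercube [0, 1]**dim:
    # average n uniform samples, then scale by the volume (here 1).
    rng = random.Random(seed)  # seeded for reproducibility
    return sum(f([rng.random() for _ in range(dim)]) for _ in range(n)) / n

# Integral of sum(x_i) over [0, 1]**15 is 15/2 = 7.5; the O(N**-0.5)
# statistical error does not depend on the dimension.
print(monte_carlo(lambda x: sum(x), 15, 100_000))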
Compare: Monte Carlo vs. Gaussian Quadrature: in one dimension, Gaussian's exponential convergence dominates Monte Carlo's $O(N^{-1/2})$ rate. But in 10+ dimensions, an $n$-point tensor-product Gaussian rule requires $n^d$ evaluations (where $d$ is the dimension), making it astronomically expensive. Monte Carlo's rate stays $O(N^{-1/2})$ regardless. Dimensionality determines which method wins.
Understanding error behavior is how you choose methods, set parameters, and verify results. Error expansions reveal both the rate of convergence and the conditions under which methods succeed or fail.
| Concept | Best Examples |
|---|---|
| Polynomial interpolation (low degree) | Rectangle Rule, Trapezoidal Rule |
| Polynomial interpolation (higher degree) | Simpson's Rule, Newton-Cotes Formulas |
| Error extrapolation | Romberg Integration |
| Optimal node placement | Gaussian Quadrature |
| Adaptive refinement | Adaptive Quadrature, Composite Methods |
| High-dimensional integration | Monte Carlo Integration |
| Error order $O(h^2)$ | Rectangle (Midpoint), Trapezoidal |
| Error order $O(h^4)$ or better | Simpson's Rule, Romberg, Gaussian |
Both the Midpoint Rule and Trapezoidal Rule have $O(h^2)$ error. What geometric or symmetry argument explains why the Midpoint Rule achieves the same order as the Trapezoidal Rule despite using degree-0 (rather than degree-1) interpolation?
Simpson's Rule is exact for polynomials up to degree 3, not just degree 2. What property of the method causes this "bonus" degree of exactness?
Compare Romberg Integration and Adaptive Quadrature: under what conditions would you prefer each, and what assumption does Romberg make that Adaptive does not?
A colleague suggests using 10-point Gaussian quadrature for a 15-dimensional integral. Explain why this is impractical and what method you would recommend instead.
You're integrating a function with a sharp spike near a single point but smooth behavior elsewhere. Would Composite Simpson's with uniform subintervals or Adaptive Quadrature be more efficient? Justify your answer in terms of how each method distributes its error budget.