
🔢Numerical Analysis II

Fundamental Numerical Integration Methods


Why This Matters

Numerical integration sits at the heart of computational mathematics—whenever you can't find a closed-form antiderivative (which is most of the time in real applications), these methods become your primary tools. In Numerical Analysis II, you're being tested on more than just applying formulas; you need to understand error behavior, convergence rates, and when to choose one method over another. The concepts here connect directly to polynomial interpolation, Taylor series error bounds, and the broader theme of approximation theory.

What separates strong exam performance from mediocre recall is understanding the why behind each method. Why does Simpson's Rule outperform the Trapezoidal Rule? Why might you abandon classical methods entirely for Monte Carlo in high dimensions? Don't just memorize the formulas—know what order of accuracy each method achieves, what assumptions it requires, and what trade-offs it makes between function evaluations and precision.


Polynomial-Based Interpolation Methods

These foundational methods approximate the integrand using polynomials of increasing degree. The key insight: higher-degree polynomial interpolation over subintervals generally yields faster error convergence—but with important caveats.

Rectangle Rule (Midpoint Rule)

  • Uses constant (degree-0) polynomial approximation—the function value at each subinterval's midpoint determines a rectangle's height
  • Error is $O(h^2)$ where $h$ is the subinterval width—surprisingly, this matches the Trapezoidal Rule despite its simplicity
  • Best starting point for understanding numerical integration; the midpoint choice actually provides better accuracy than left or right endpoint versions
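
To make the construction concrete, here is a minimal composite midpoint rule sketch in Python; the integrand `np.exp` and the interval [0, 1] are illustrative choices, not examples from the text, and `f` is assumed to accept NumPy arrays.

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    """Composite midpoint rule: a degree-0 (constant) fit on each of n subintervals."""
    h = (b - a) / n
    midpoints = a + h * (np.arange(n) + 0.5)   # center of each subinterval
    return h * np.sum(f(midpoints))

# Illustrative check: integrate e^x on [0, 1]; the exact value is e - 1
approx = midpoint_rule(np.exp, 0.0, 1.0, 100)
```

Doubling n (halving $h$) should cut the error by roughly a factor of 4, consistent with the $O(h^2)$ rate above.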

Trapezoidal Rule

  • Linear (degree-1) interpolation connects function values at subinterval endpoints to form trapezoids
  • Error is $O(h^2)$ for the composite rule, with leading term $-\frac{(b-a)h^2}{12}f''(\xi)$—the second derivative controls accuracy
  • Exact for linear functions and performs well on smooth, non-oscillatory integrands where curvature is mild
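
A matching composite trapezoidal sketch, under the same assumptions (a vectorized integrand that accepts NumPy arrays):

```python
import numpy as np

def trapezoidal_rule(f, a, b, n):
    """Composite trapezoidal rule: linear interpolation between n + 1 equally spaced nodes."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # Endpoints weighted 1, interior nodes weighted 2, all scaled by h/2
    return (h / 2) * (y[0] + 2 * np.sum(y[1:-1]) + y[-1])
```

For already-sampled data you would typically reach for `scipy.integrate.trapezoid`; the hand-rolled version above just makes the weights explicit.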

Simpson's Rule

  • Quadratic (degree-2) polynomial interpolation over pairs of subintervals, requiring an even number of subintervals
  • Error is $O(h^4)$—two orders better than Trapezoidal, making it dramatically more efficient for smooth functions
  • Exact for polynomials up to degree 3 (not just degree 2!) due to symmetry properties of the method
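
A minimal composite Simpson's sketch under the same assumptions; the even-$n$ check mirrors the requirement in the first bullet.

```python
import numpy as np

def simpsons_rule(f, a, b, n):
    """Composite Simpson's rule; n must be even (each parabola spans two subintervals)."""
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's rule")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # Weight pattern 1, 4, 2, 4, ..., 2, 4, 1, scaled by h/3
    return (h / 3) * (y[0] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]) + y[-1])
```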

Compare: Trapezoidal Rule vs. Simpson's Rule—both use endpoint values, but Simpson's adds the midpoint and weights them (1, 4, 1 pattern) to fit parabolas instead of lines. If an FRQ asks you to justify choosing Simpson's, cite the $O(h^4)$ vs. $O(h^2)$ error improvement.

Newton-Cotes Formulas

  • General family using equally spaced nodes—Rectangle, Trapezoidal, and Simpson's are all special cases (closed formulas of degree 0, 1, and 2)
  • Higher-degree formulas exist but suffer from Runge's phenomenon—oscillatory instability that can make errors increase with more nodes
  • Open vs. closed variants: closed formulas include endpoints; open formulas (like midpoint) exclude them, useful when endpoint values are undefined
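
For reference, the lowest-order closed Newton-Cotes rules on equally spaced nodes $x_0, \dots, x_n$ with spacing $h$ (standard textbook forms, stated here for context):

$$
\begin{aligned}
n=1 \text{ (Trapezoidal):}\quad & \tfrac{h}{2}\,(f_0 + f_1) \\
n=2 \text{ (Simpson's):}\quad & \tfrac{h}{3}\,(f_0 + 4f_1 + f_2) \\
n=3 \text{ (Simpson's 3/8):}\quad & \tfrac{3h}{8}\,(f_0 + 3f_1 + 3f_2 + f_3) \\
n=4 \text{ (Boole's):}\quad & \tfrac{2h}{45}\,(7f_0 + 32f_1 + 12f_2 + 32f_3 + 7f_4)
\end{aligned}
$$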

Extrapolation and Refinement Techniques

These methods build on basic rules by systematically combining approximations to cancel error terms. The principle: if you know the structure of the error, you can eliminate leading terms through clever combinations.

Romberg Integration

  • Applies Richardson extrapolation to Trapezoidal Rule estimates—combines results from successively halved step sizes to cancel error terms
  • Builds a triangular table where each column eliminates the next even power of $h$ from the error expansion
  • Achieves high accuracy efficiently for smooth functions; the diagonal entries converge rapidly, often matching Simpson's and beyond
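
A compact Romberg sketch, assuming a vectorized integrand; `levels` is an illustrative parameter name (not from the text) controlling how many rows of the triangular table are built.

```python
import numpy as np

def romberg(f, a, b, levels):
    """Build the Romberg triangular table R[i, j].

    Column 0 holds trapezoidal estimates with 2**i subintervals; each later
    column applies Richardson extrapolation to cancel the next even power of h.
    """
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = h / 2 * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2
        # Trapezoidal refinement: reuse previous nodes, add only the new midpoints
        new_points = a + h * np.arange(1, 2**i, 2)
        R[i, 0] = 0.5 * R[i - 1, 0] + h * np.sum(f(new_points))
        for j in range(1, i + 1):
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4**j - 1)
    return R
```

The most refined estimate sits in the bottom-right entry of the table, `R[levels - 1, levels - 1]`.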

Composite Integration Methods

  • Subdivides the interval and applies a basic rule to each piece—Composite Simpson's applies Simpson's Rule to consecutive subinterval pairs
  • Error formulas scale with total interval width and subinterval count: Composite Trapezoidal has error $O(h^2)$, Composite Simpson's has $O(h^4)$
  • Essential for practical implementation—single-interval formulas rarely provide sufficient accuracy; composition is how you actually use these methods
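
The standard composite error formulas make the scaling with both the interval width $b-a$ and the step $h$ explicit (for some $\xi \in (a,b)$, assuming the required derivatives are continuous):

$$
E_{\text{trap}} = -\frac{(b-a)\,h^2}{12}\, f''(\xi), \qquad
E_{\text{Simpson}} = -\frac{(b-a)\,h^4}{180}\, f^{(4)}(\xi).
$$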

Compare: Romberg Integration vs. Composite Simpson's—both achieve high accuracy, but Romberg automatically improves order through extrapolation while Composite Simpson's requires you to choose the subdivision count upfront. Romberg is more "set and forget" for well-behaved functions.


Optimal Node Selection Methods

Rather than using equally spaced points, these methods choose nodes strategically to maximize accuracy per function evaluation. The insight: optimal node placement can achieve exponential convergence, far outperforming fixed-spacing methods.

Gaussian Quadrature

  • Chooses nodes as roots of orthogonal polynomials (Legendre, Chebyshev, etc.) with corresponding optimal weights
  • $n$-point Gaussian quadrature is exact for polynomials of degree $2n-1$—roughly twice what you'd expect from $n$ points
  • Achieves exponential convergence for analytic functions, making it the gold standard when function evaluations are expensive
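
A short Gauss-Legendre sketch built on NumPy's `leggauss` nodes and weights, which are defined on $[-1, 1]$ and must be mapped to $[a, b]$; the 5-point example call is illustrative.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b]; exact for polynomials up to degree 2n - 1."""
    nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    # Affine map from [-1, 1] to [a, b]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Illustrative call: even 5 nodes give very high accuracy for the smooth integrand e^x on [0, 1]
approx = gauss_legendre(np.exp, 0.0, 1.0, 5)
```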

Adaptive Quadrature

  • Dynamically refines subintervals based on local error estimates—subdivides where the function is "difficult" and leaves easy regions coarse
  • Compares estimates from different rules (e.g., Simpson's vs. refined Simpson's) to estimate local error and guide subdivision
  • Balances accuracy and efficiency by concentrating computational effort where it's needed most—essential for functions with localized complexity
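
A classic recursive adaptive Simpson sketch: the coarse estimate over [a, b] is compared against the two refined half-interval estimates, and an interval is split only when the discrepancy exceeds the (illustrative) tolerance.

```python
def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson's rule: subdivide only where the local error estimate is large."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6 * (fa + 4 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        # Standard local error estimate: coarse estimate vs. sum of the two refined estimates
        if abs(left + right - whole) < 15 * tol:
            return left + right + (left + right - whole) / 15
        return (recurse(a, m, fa, flm, fm, left, tol / 2)
                + recurse(m, b, fm, frm, fb, right, tol / 2))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)
```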

Compare: Gaussian Quadrature vs. Adaptive Quadrature—Gaussian optimizes node placement globally assuming smooth behavior, while Adaptive adjusts locally to handle varying function complexity. For integrands with sharp peaks or discontinuities, Adaptive wins; for smooth analytic functions, Gaussian's efficiency is hard to beat.


Probabilistic and High-Dimensional Methods

When deterministic methods become impractical—especially in high dimensions—randomized approaches offer a fundamentally different strategy. The key: Monte Carlo convergence depends on sample count, not dimension, breaking the "curse of dimensionality."

Monte Carlo Integration

  • Estimates integrals via random sampling: approximate $\int f \, dV \approx V \cdot \frac{1}{N}\sum_{i=1}^{N} f(x_i)$, where the $x_i$ are random points drawn uniformly from a region of volume $V$
  • Convergence rate is $O(N^{-1/2})$ regardless of dimension—slow compared to deterministic methods in 1D, but unbeatable in high dimensions
  • Variance reduction techniques (importance sampling, stratification, control variates) can dramatically improve practical performance
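
A plain Monte Carlo sketch over a $d$-dimensional box; the 10-dimensional test integrand and the sample count are illustrative, and the returned standard error is the usual sample-variance estimate.

```python
import numpy as np

def monte_carlo(f, lower, upper, n_samples, rng=None):
    """Plain Monte Carlo over the box [lower, upper]; error shrinks like O(N^{-1/2})."""
    if rng is None:
        rng = np.random.default_rng(0)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    volume = np.prod(upper - lower)
    samples = rng.uniform(lower, upper, size=(n_samples, lower.size))
    values = f(samples)                      # f maps an (N, d) array to N values
    estimate = volume * values.mean()
    std_error = volume * values.std(ddof=1) / np.sqrt(n_samples)
    return estimate, std_error

# Example in 10 dimensions: the exact value of the integral of sum(x_i^2) over the unit cube is 10/3
est, err = monte_carlo(lambda x: np.sum(x**2, axis=1), np.zeros(10), np.ones(10), 100_000)
```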

Compare: Monte Carlo vs. Gaussian Quadrature—in one dimension, Gaussian's exponential convergence crushes Monte Carlo's $O(N^{-1/2})$ rate. But in 10+ dimensions, Gaussian requires an astronomically growing number of nodes while Monte Carlo's rate stays constant. Dimensionality determines which method wins.


Error Analysis and Convergence Theory

Understanding error behavior isn't just theoretical—it's how you choose methods, set parameters, and verify results. The principle: error expansions reveal both the rate of convergence and the conditions under which methods succeed or fail.

Error Analysis and Convergence Rates

  • Order of accuracy $O(h^p)$ means error scales as the $p$-th power of step size—doubling the number of subintervals reduces the error by a factor of $2^p$
  • Error bounds depend on derivative magnitudes: Trapezoidal error involves $f''$, Simpson's involves $f^{(4)}$—smooth functions benefit from higher-order methods
  • Asymptotic vs. pre-asymptotic behavior: theoretical rates apply only for sufficiently small hh; practical problems may not reach the asymptotic regime
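
A quick empirical check of the observed order, reusing the trapezoidal_rule and simpsons_rule sketches above: comparing errors at $h$ and $h/2$ recovers $p$ via $p \approx \log_2\!\left(|E(h)| / |E(h/2)|\right)$. The integrand and node counts are illustrative.

```python
import numpy as np

exact = np.e - 1.0                                   # exact value of the integral of e^x on [0, 1]
for rule, name in [(trapezoidal_rule, "trapezoid"), (simpsons_rule, "Simpson")]:
    e_h = abs(rule(np.exp, 0.0, 1.0, 16) - exact)    # error with 16 subintervals
    e_h2 = abs(rule(np.exp, 0.0, 1.0, 32) - exact)   # error with 32 subintervals (h halved)
    print(f"{name}: observed order ~ {np.log2(e_h / e_h2):.2f}")
# Expected: roughly 2 for the trapezoidal rule and roughly 4 for Simpson's rule
```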

Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Polynomial interpolation (low degree) | Rectangle Rule, Trapezoidal Rule |
| Polynomial interpolation (higher degree) | Simpson's Rule, Newton-Cotes Formulas |
| Error extrapolation | Romberg Integration |
| Optimal node placement | Gaussian Quadrature |
| Adaptive refinement | Adaptive Quadrature, Composite Methods |
| High-dimensional integration | Monte Carlo Integration |
| Error order $O(h^2)$ | Rectangle (Midpoint), Trapezoidal |
| Error order $O(h^4)$ or better | Simpson's Rule, Romberg, Gaussian |

Self-Check Questions

  1. Both the Midpoint Rule and Trapezoidal Rule have $O(h^2)$ error. What geometric difference explains why they achieve the same order despite different constructions?

  2. Simpson's Rule is exact for polynomials up to degree 3, not just degree 2. What property of the method causes this "bonus" degree of exactness?

  3. Compare Romberg Integration and Adaptive Quadrature: under what conditions would you prefer each, and what assumption does Romberg make that Adaptive does not?

  4. A colleague suggests using 10-point Gaussian quadrature for a 15-dimensional integral. Explain why this is impractical and what method you would recommend instead.

  5. You're integrating a function with a sharp spike near $x = 0.3$ but smooth behavior elsewhere. Would Composite Simpson's with uniform subintervals or Adaptive Quadrature be more efficient? Justify your answer in terms of error distribution.