Gaussian Quadrature for Integration
Fundamental Principles and Advantages
Gaussian quadrature approximates definite integrals by choosing evaluation points (nodes) and weights that maximize accuracy for a given number of function evaluations. Unlike Newton-Cotes methods that use equally spaced points, Gaussian quadrature places nodes strategically to extract more information from each evaluation.
The central result is this: with $n$ nodes, Gaussian quadrature exactly integrates polynomials of degree $2n - 1$ or less. That's roughly twice the polynomial degree you'd expect from $n$ points, and it's the reason the method is so efficient. You get both the $n$ node locations and the $n$ weights as free parameters, so you have $2n$ unknowns to satisfy $2n$ polynomial exactness conditions.
For smooth, well-behaved functions, this translates to high accuracy with relatively few function evaluations. The tradeoff is that the nodes are no longer equally spaced, so you can't reuse function values if you increase $n$.
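As a concrete illustration of the exactness property, here's a minimal sketch using NumPy's `leggauss` routine: a 3-point Gauss-Legendre rule integrates a degree-4 polynomial exactly, since anything up to degree 5 is within its reach.

```python
import numpy as np

# 3-point Gauss-Legendre rule on [-1, 1]: exact for polynomials of degree <= 5
nodes, weights = np.polynomial.legendre.leggauss(3)

def f(x):
    return 3.0 * x**4 + x**3  # integral over [-1, 1] is exactly 6/5

approx = np.sum(weights * f(nodes))
print(approx)  # 1.2 up to machine precision
```

Only three function evaluations, yet the result is exact; a Newton-Cotes rule would need more points to achieve the same degree of precision.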
Efficiency and Applications
Gaussian quadrature is commonly used across scientific computing:
- Computational physics: quantum mechanics calculations, scattering problems
- Engineering: structural analysis via finite element methods, heat transfer
- Finance: option pricing models requiring numerical integration
- Statistics: evaluating probability distributions that lack closed-form CDFs
The method adapts to different problems by choosing an appropriate weight function. Each weight function corresponds to a different family of Gaussian quadrature (covered in detail below).
Gaussian Quadrature Formula and Orthogonal Polynomials

General Formula and Components
The general Gaussian quadrature formula is:

$$\int_a^b w(x)\, f(x)\, dx \;\approx\; \sum_{i=1}^{n} w_i\, f(x_i)$$

where:
- $w(x)$ is the weight function (nonnegative, integrable over $[a, b]$)
- $f(x)$ is the integrand you want to approximate
- $x_i$ are the nodes (abscissas), determined as roots of the degree-$n$ orthogonal polynomial associated with $w(x)$
- $w_i$ are the weights, computed so the formula is exact for polynomials up to degree $2n - 1$
The weights can be expressed through Lagrange basis polynomials. Specifically, $w_i = \int_a^b w(x)\, \ell_i(x)\, dx$, where $\ell_i(x)$ is the $i$-th Lagrange interpolating polynomial built on the nodes $x_1, \dots, x_n$. This construction guarantees the maximum degree of precision.
Role of Orthogonal Polynomials
Orthogonal polynomials are what make Gaussian quadrature work. A family of polynomials $\{p_k\}$ is orthogonal with respect to $w(x)$ on $[a, b]$ if:

$$\int_a^b w(x)\, p_m(x)\, p_n(x)\, dx = 0 \quad \text{for } m \neq n$$

The nodes of $n$-point Gaussian quadrature are the roots of $p_n(x)$. This choice is what pushes the exactness from degree $n - 1$ (which any interpolatory rule achieves) up to degree $2n - 1$.
Different families correspond to different quadrature types:
| Polynomial Family | Weight Function | Interval |
|---|---|---|
| Legendre | $1$ | $[-1, 1]$ |
| Chebyshev (1st kind) | $1/\sqrt{1 - x^2}$ | $[-1, 1]$ |
| Hermite | $e^{-x^2}$ | $(-\infty, \infty)$ |
| Laguerre | $e^{-x}$ | $[0, \infty)$ |
| Jacobi | $(1 - x)^\alpha (1 + x)^\beta$ | $[-1, 1]$ |
These polynomials all satisfy a three-term recurrence relation of the form:

$$p_{k+1}(x) = (a_k x + b_k)\, p_k(x) - c_k\, p_{k-1}(x)$$
This recurrence is essential in practice because it lets you compute nodes and weights efficiently and with good numerical stability, rather than finding polynomial roots from explicit coefficient formulas.
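Here's a sketch of that approach for the Legendre case, known as the Golub-Welsch algorithm: the recurrence coefficients form a symmetric tridiagonal Jacobi matrix whose eigenvalues are the nodes and whose eigenvectors yield the weights. The off-diagonal formula $\beta_k = k/\sqrt{4k^2 - 1}$ below is specific to Legendre polynomials.

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch sketch: nodes and weights from the eigendecomposition
    of the Jacobi matrix built from the three-term recurrence coefficients."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)      # Legendre off-diagonal coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)  # diagonal is zero for Legendre
    eigvals, eigvecs = np.linalg.eigh(J)
    nodes = eigvals                            # nodes = eigenvalues of J
    weights = 2.0 * eigvecs[0, :]**2           # mu_0 = integral of w(x)=1 over [-1,1] = 2
    return nodes, weights

x, w = gauss_legendre(4)
xr, wr = np.polynomial.legendre.leggauss(4)   # reference values
print(np.allclose(x, xr), np.allclose(w, wr))
```

The symmetric eigenvalue problem is well-conditioned, which is why this route is preferred over root-finding on explicit polynomial coefficients.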
Gaussian Quadrature Accuracy vs Other Methods

Convergence Properties
For analytic functions, Gaussian quadrature exhibits exponential convergence: the error decreases geometrically as you increase $n$. This is dramatically faster than the algebraic convergence of Newton-Cotes rules.
The error for $n$-point Gaussian quadrature applied to a function $f \in C^{2n}[a, b]$ has the form:

$$E_n = \frac{f^{(2n)}(\xi)}{(2n)!} \int_a^b w(x)\, [\pi_n(x)]^2\, dx$$

for some $\xi \in (a, b)$, where $\pi_n$ is the monic orthogonal polynomial of degree $n$. Notice the $2n$-th derivative appears, confirming exactness for polynomials of degree up to $2n - 1$.
Comparative Analysis
Here's how the error scaling compares for a fixed step size or number of nodes:
- Trapezoidal rule: error $O(h^2)$, exact for polynomials up to degree 1
- Simpson's rule: error $O(h^4)$, exact for polynomials up to degree 3
- $n$-point Gaussian quadrature: exact for polynomials up to degree $2n - 1$
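To see these scalings empirically, one can compare the trapezoidal rule against Gauss-Legendre on a smooth integrand. A sketch, using $\int_{-1}^{1} e^x\, dx = e - e^{-1}$ as the reference value:

```python
import numpy as np

f = np.exp
exact = np.e - 1.0 / np.e  # integral of exp(x) over [-1, 1]

for n in (2, 4, 8, 16):
    # Trapezoidal rule on n+1 equally spaced points
    xs = np.linspace(-1.0, 1.0, n + 1)
    ys = f(xs)
    h = xs[1] - xs[0]
    trap = h * (ys.sum() - 0.5 * (ys[0] + ys[-1]))

    # n-point Gauss-Legendre
    nodes, weights = np.polynomial.legendre.leggauss(n)
    gauss = np.sum(weights * f(nodes))

    print(f"n={n:2d}  trap err={abs(trap - exact):.1e}  gauss err={abs(gauss - exact):.1e}")
```

The trapezoidal error shrinks by roughly a factor of 4 per doubling of $n$, while the Gaussian error collapses to machine precision within a handful of points.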
For smooth functions, Gaussian quadrature needs far fewer evaluations to reach a target accuracy. However, the method can struggle with:
- Singularities within or at the endpoints of the integration interval
- Rapid oscillations that aren't well-captured by a polynomial approximation
- Discontinuities in the integrand or its derivatives
When these issues arise, two practical strategies help:
- Composite Gaussian quadrature: subdivide the interval into smaller subintervals and apply Gaussian quadrature on each one
- Adaptive Gaussian quadrature: automatically refine the subintervals where the integrand is difficult, concentrating effort where it's needed
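A minimal sketch of the composite strategy with a fixed number of equal subintervals (the peaked integrand below is an illustrative choice with a known closed-form integral):

```python
import numpy as np

def composite_gauss(f, a, b, n_sub, n_pts=4):
    """Split [a, b] into n_sub equal subintervals and apply an
    n_pts-point Gauss-Legendre rule on each, summing the results."""
    nodes, weights = np.polynomial.legendre.leggauss(n_pts)
    edges = np.linspace(a, b, n_sub + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
        total += half * np.sum(weights * f(mid + half * nodes))
    return total

# Illustrative integrand with a sharp peak at x = 0.5 (width ~0.1)
f = lambda x: 1.0 / (0.01 + (x - 0.5)**2)
approx = composite_gauss(f, 0.0, 1.0, 50)
exact = 20.0 * np.arctan(5.0)  # closed-form value of the integral
print(approx, exact)
```

A single global rule would need a high order to resolve the peak; the composite rule resolves it by making each subinterval small enough that the integrand looks polynomial locally.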
The Lebesgue constant, which measures how much interpolation can amplify errors, grows only slowly with $n$ for Gaussian nodes. This contributes to the method's numerical stability compared to interpolation at equally spaced points, where the Lebesgue constant grows exponentially (the Runge phenomenon).
Applying Gaussian Quadrature to Integrals
Specific Gaussian Quadrature Types
Gauss-Legendre quadrature is the most commonly used variant. It has weight function $w(x) = 1$ on $[-1, 1]$. To apply it to an arbitrary interval $[a, b]$, use the linear change of variables:

$$x = \frac{b - a}{2}\, t + \frac{a + b}{2}$$

which transforms the integral as:

$$\int_a^b f(x)\, dx = \frac{b - a}{2} \int_{-1}^{1} f\!\left(\frac{b - a}{2}\, t + \frac{a + b}{2}\right) dt$$
You then apply the standard Gauss-Legendre nodes and weights to the right-hand side.
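Putting that change of variables into code, a small helper (the function name is ours):

```python
import numpy as np

def gauss_on_interval(f, a, b, n):
    """Apply n-point Gauss-Legendre to [a, b] via the linear map
    x = (b - a)/2 * t + (a + b)/2, picking up the Jacobian (b - a)/2."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * f(x))

val = gauss_on_interval(np.sin, 0.0, np.pi, 6)  # exact value is 2
print(val)
```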
Gauss-Chebyshev quadrature uses weight function $w(x) = 1/\sqrt{1 - x^2}$ on $[-1, 1]$. It's well-suited for integrands of the form $g(x)/\sqrt{1 - x^2}$, where $g$ is smooth. The nodes and weights have closed-form expressions, $x_i = \cos\!\left(\frac{(2i - 1)\pi}{2n}\right)$ and $w_i = \pi/n$, which is a nice computational advantage. Example: computing integrals of the form $\int_{-1}^{1} \frac{g(x)}{\sqrt{1 - x^2}}\, dx$.
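A sketch using the closed-form nodes and weights; with $g = 1$ the rule reproduces $\int_{-1}^{1} dx/\sqrt{1 - x^2} = \pi$ exactly:

```python
import numpy as np

def gauss_chebyshev(g, n):
    """Gauss-Chebyshev (1st kind): approximates the integral of
    g(x)/sqrt(1 - x^2) over [-1, 1] using closed-form nodes and weights."""
    i = np.arange(1, n + 1)
    nodes = np.cos((2 * i - 1) * np.pi / (2 * n))  # roots of T_n
    return (np.pi / n) * np.sum(g(nodes))          # all weights equal pi/n

val = gauss_chebyshev(lambda x: np.ones_like(x), 8)  # g = 1 gives pi
print(val)
```

No eigenvalue computation or root-finding is needed here, which is what makes this variant so cheap.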
Gauss-Hermite quadrature uses $w(x) = e^{-x^2}$ on $(-\infty, \infty)$. This is natural for problems involving Gaussian-type integrands, such as quantum mechanics expectation values. Example: evaluating integrals of the form $\int_{-\infty}^{\infty} e^{-x^2}\, g(x)\, dx$.
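NumPy ships these nodes and weights as `hermgauss`; a quick check against two integrals with known values ($\sqrt{\pi}$ and $\sqrt{\pi}/2$):

```python
import numpy as np

# Gauss-Hermite: sum of w_i * f(x_i) approximates the integral
# of e^{-x^2} f(x) over the whole real line.
nodes, weights = np.polynomial.hermite.hermgauss(10)

s0 = np.sum(weights)               # f = 1: the Gaussian integral, sqrt(pi)
s2 = np.sum(weights * nodes**2)    # f = x^2: exact value sqrt(pi)/2
print(s0, s2)
```

Note that only the smooth factor $f$ is evaluated; the Gaussian decay is absorbed into the weights.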
Advanced Techniques and Applications
Gauss-Laguerre quadrature uses $w(x) = e^{-x}$ on $[0, \infty)$, making it appropriate for integrands with exponential decay. Example: integrals of the form $\int_0^\infty e^{-x}\, g(x)\, dx$.
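Similarly with `laggauss`, one can check the rule against the moments $\int_0^\infty e^{-x} x^k\, dx = k!$:

```python
import numpy as np

# Gauss-Laguerre: sum of w_i * f(x_i) approximates the integral
# of e^{-x} f(x) over [0, infinity).
nodes, weights = np.polynomial.laguerre.laggauss(8)

# x^k has integral k! against e^{-x}; exact here for k up to 15
for k in (0, 1, 5):
    print(k, np.sum(weights * nodes**k))  # 1, 1, 120 (up to roundoff)
```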
Gauss-Jacobi quadrature generalizes several of the above. Its weight function is $w(x) = (1 - x)^\alpha (1 + x)^\beta$ on $[-1, 1]$, with parameters $\alpha, \beta > -1$. Legendre corresponds to $\alpha = \beta = 0$, and Chebyshev (1st kind) to $\alpha = \beta = -1/2$. Adjusting $\alpha$ and $\beta$ lets you handle endpoint singularities of algebraic type directly within the quadrature framework.
Composite Gaussian quadrature subdivides a large interval into smaller subintervals and applies Gaussian quadrature on each. This is useful when:
- The interval is large and the integrand varies significantly across it
- The function has localized features (sharp peaks, near-singularities)
For example, to evaluate an integral over a long interval $[a, b]$, you'd split it into subintervals and apply Gauss-Legendre on each, summing the results.
Adaptive Gaussian quadrature takes this further by automatically choosing where to refine. It estimates the error on each subinterval and subdivides only where the error is large. This is especially valuable for integrands with a near-singularity at one endpoint, where the singularity demands more nodes nearby but the rest of the interval is smooth.
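A recursive sketch of the adaptive idea, assuming a simple accept/subdivide test: compare each interval's estimate with the sum of estimates over its two halves, and recurse only where they disagree.

```python
import numpy as np

def adaptive_gauss(f, a, b, tol=1e-10, n=7):
    """Adaptive Gauss-Legendre (a sketch): accept an interval when its
    estimate agrees with the sum over its halves, otherwise subdivide."""
    t, w = np.polynomial.legendre.leggauss(n)

    def rule(lo, hi):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
        return half * np.sum(w * f(mid + half * t))

    def refine(lo, hi, whole):
        mid = 0.5 * (lo + hi)
        left, right = rule(lo, mid), rule(mid, hi)
        if abs(left + right - whole) < tol:
            return left + right
        return refine(lo, mid, left) + refine(mid, hi, right)

    return refine(a, b, rule(a, b))

# Endpoint near-singularity: sqrt(x) has unbounded derivatives at x = 0,
# so the recursion automatically clusters subintervals there.
val = adaptive_gauss(lambda x: np.sqrt(x), 0.0, 1.0)
print(val)  # exact value is 2/3
```

Production implementations distribute the tolerance across subintervals and bound the recursion depth; this sketch omits both for clarity.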