13.1 Gaussian Quadrature Theory

Written by the Fiveable Content Team • Last updated August 2025

Gaussian Quadrature for Integration

Fundamental Principles and Advantages

Gaussian quadrature approximates definite integrals by choosing evaluation points (nodes) and weights that maximize accuracy for a given number of function evaluations. Unlike Newton-Cotes methods that use equally spaced points, Gaussian quadrature places nodes strategically to extract more information from each evaluation.

The central result is this: with $n$ nodes, Gaussian quadrature exactly integrates polynomials of degree $2n - 1$ or less. That's roughly twice the polynomial degree you'd expect from $n$ points, and it's the reason the method is so efficient. You get both the node locations $x_i$ and the weights $w_i$ as free parameters, so you have $2n$ unknowns to satisfy $2n$ polynomial exactness conditions.

For smooth, well-behaved functions, this translates to high accuracy with relatively few function evaluations. The tradeoff is that the nodes are no longer equally spaced, so you can't reuse function values if you increase $n$.

Efficiency and Applications

Gaussian quadrature is commonly used across scientific computing:

  • Computational physics: quantum mechanics calculations, scattering problems
  • Engineering: structural analysis via finite element methods, heat transfer
  • Finance: option pricing models requiring numerical integration
  • Statistics: evaluating probability distributions that lack closed-form CDFs

The method adapts to different problems by choosing an appropriate weight function. Each weight function corresponds to a different family of Gaussian quadrature (covered in detail below).

Gaussian Quadrature Formula and Orthogonal Polynomials

General Formula and Components

The general Gaussian quadrature formula is:

$$\int_{a}^{b} w(x)f(x)\,dx \approx \sum_{i=1}^{n} w_i f(x_i)$$

where:

  • $w(x)$ is the weight function (nonnegative and integrable over $[a,b]$)
  • $f(x)$ is the integrand you want to approximate
  • $x_i$ are the nodes (abscissas), determined as the roots of the degree-$n$ orthogonal polynomial associated with $w(x)$
  • $w_i$ are the weights, computed so the formula is exact for polynomials up to degree $2n - 1$

The weights can be expressed through Lagrange basis polynomials. Specifically, $w_i = \int_a^b w(x) L_i(x)\,dx$, where $L_i(x)$ is the $i$-th Lagrange interpolating polynomial built on the nodes. This construction guarantees the maximum degree of precision.
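As a quick sanity check, the exactness claim can be verified numerically. This sketch uses NumPy's built-in Gauss-Legendre rule (weight $w(x) = 1$ on $[-1,1]$) and tests monomials directly:

```python
import numpy as np

# Check that n-point Gauss-Legendre integrates monomials x^k exactly
# for all k up to 2n - 1 (weight function w(x) = 1 on [-1, 1]).
n = 4
nodes, weights = np.polynomial.legendre.leggauss(n)

for k in range(2 * n):  # degrees 0 .. 2n-1
    quad = np.sum(weights * nodes**k)
    true_val = 0.0 if k % 2 == 1 else 2.0 / (k + 1)  # ∫_{-1}^{1} x^k dx
    assert abs(quad - true_val) < 1e-12, (k, quad, true_val)

# Degree 2n is no longer integrated exactly:
approx = np.sum(weights * nodes**(2 * n))
exact = 2.0 / (2 * n + 1)
print(abs(approx - exact) > 1e-6)  # True: exactness breaks at degree 2n
```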

Role of Orthogonal Polynomials

Orthogonal polynomials are what make Gaussian quadrature work. A family of polynomials $\{P_0, P_1, P_2, \ldots\}$ is orthogonal with respect to $w(x)$ on $[a,b]$ if:

$$\int_a^b w(x) P_m(x) P_k(x)\,dx = 0 \quad \text{for } m \neq k$$

The nodes of $n$-point Gaussian quadrature are the roots of $P_n(x)$. This choice is what pushes the exactness from degree $n - 1$ (which any interpolatory rule achieves) up to degree $2n - 1$.

Different families correspond to different quadrature types:

| Polynomial Family | Weight Function $w(x)$ | Interval |
|---|---|---|
| Legendre | $1$ | $[-1, 1]$ |
| Chebyshev (1st kind) | $\frac{1}{\sqrt{1 - x^2}}$ | $[-1, 1]$ |
| Hermite | $e^{-x^2}$ | $(-\infty, \infty)$ |
| Laguerre | $e^{-x}$ | $[0, \infty)$ |
| Jacobi | $(1-x)^\alpha(1+x)^\beta$ | $[-1, 1]$ |

These polynomials all satisfy a three-term recurrence relation of the form:

$$P_{k+1}(x) = (A_k x + B_k)P_k(x) - C_k P_{k-1}(x)$$

This recurrence is essential in practice because it lets you compute nodes and weights efficiently and with good numerical stability, rather than finding polynomial roots from explicit coefficient formulas.
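For Gauss-Legendre, the recurrence coefficients are known in closed form, so this reduces to a small symmetric eigenvalue problem (the Golub-Welsch approach). A minimal sketch; the coefficient formula $\beta_k = k/\sqrt{4k^2 - 1}$ is specific to the Legendre family:

```python
import numpy as np

# Golub-Welsch sketch for Gauss-Legendre: build the symmetric tridiagonal
# Jacobi matrix from the three-term recurrence coefficients, then read
# nodes off its eigenvalues and weights off its eigenvectors.
def gauss_legendre(n):
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)      # off-diagonal recurrence coefficients
    J = np.diag(beta, -1) + np.diag(beta, 1)  # diagonal is zero for Legendre
    nodes, vecs = np.linalg.eigh(J)
    weights = 2.0 * vecs[0, :]**2             # mu_0 = ∫_{-1}^{1} 1 dx = 2
    return nodes, weights

# Matches NumPy's built-in rule:
x_ref, w_ref = np.polynomial.legendre.leggauss(5)
x, w = gauss_legendre(5)
print(np.allclose(x, x_ref), np.allclose(w, w_ref))  # True True
```

No polynomial root-finding from explicit coefficients is needed, which is why this route stays numerically stable even for large $n$.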

Gaussian Quadrature Accuracy vs Other Methods

Convergence Properties

For analytic functions (those locally representable by convergent Taylor series), Gaussian quadrature exhibits exponential convergence: the error decreases exponentially as you increase $n$. This is dramatically faster than the algebraic convergence of Newton-Cotes rules.

The error for $n$-point Gaussian quadrature applied to a function $f$ has the form:

$$E_n(f) = \frac{f^{(2n)}(\xi)}{(2n)!} \int_a^b w(x)\left[P_n(x)\right]^2\,dx$$

for some $\xi \in (a,b)$, where $P_n$ is taken to be the monic orthogonal polynomial for $w(x)$. Notice the $2n$-th derivative appears, confirming exactness for polynomials of degree up to $2n - 1$ (for which $f^{(2n)} \equiv 0$).

Comparative Analysis

Here's how the error scaling compares for a fixed step size $h$ or number of nodes:

  • Trapezoidal rule: error $O(h^2)$, exact for polynomials up to degree 1
  • Simpson's rule: error $O(h^4)$, exact for polynomials up to degree 3
  • $n$-point Gaussian quadrature: exact for polynomials up to degree $2n - 1$
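To make the comparison concrete, here is a sketch pitting Simpson's rule against 3-point Gauss-Legendre on $f(x) = x^4$; both use exactly three function evaluations:

```python
import numpy as np

# Both rules below use three evaluations on [-1, 1], but Simpson is exact
# only through degree 3, while 3-point Gauss-Legendre is exact through
# degree 2*3 - 1 = 5.
f = lambda x: x**4          # ∫_{-1}^{1} x^4 dx = 2/5
exact = 2.0 / 5.0

# Simpson's rule with nodes -1, 0, 1 (h = 1)
simpson = (1.0 / 3.0) * (f(-1.0) + 4.0 * f(0.0) + f(1.0))

# 3-point Gauss-Legendre
x, w = np.polynomial.legendre.leggauss(3)
gauss = np.sum(w * f(x))

print(abs(simpson - exact))  # ~0.267: Simpson misses degree 4
print(abs(gauss - exact))    # ~1e-16: Gauss is exact up to rounding
```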

For smooth functions, Gaussian quadrature needs far fewer evaluations to reach a target accuracy. However, the method can struggle with:

  • Singularities within or at the endpoints of the integration interval
  • Rapid oscillations that aren't well-captured by a polynomial approximation
  • Discontinuities in the integrand or its derivatives

When these issues arise, two practical strategies help:

  1. Composite Gaussian quadrature: subdivide the interval into smaller subintervals and apply Gaussian quadrature on each one
  2. Adaptive Gaussian quadrature: automatically refine the subintervals where the integrand is difficult, concentrating effort where it's needed

The Lebesgue constant, which measures how much an interpolation scheme can amplify errors in the function values, grows slowly for Gaussian nodes. This contributes to the method's numerical stability compared to interpolation at equally spaced points, where the Lebesgue constant grows exponentially (the Runge phenomenon).

Applying Gaussian Quadrature to Integrals

Specific Gaussian Quadrature Types

Gauss-Legendre quadrature is the most commonly used variant. It has weight function $w(x) = 1$ on $[-1, 1]$. To apply it to an arbitrary interval $[a, b]$, use the linear change of variables:

$$x = \frac{b - a}{2}t + \frac{b + a}{2}, \quad t \in [-1, 1]$$

which transforms the integral as:

$$\int_a^b f(x)\,dx = \frac{b-a}{2}\int_{-1}^{1} f\!\left(\frac{b-a}{2}t + \frac{b+a}{2}\right)dt$$

You then apply the standard Gauss-Legendre nodes and weights to the right-hand side.
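A sketch of this transformation in code, using NumPy's `leggauss` for the reference rule on $[-1,1]$; the test integrand $e^x$ on $[0,1]$ is an illustrative choice:

```python
import numpy as np

# Gauss-Legendre on an arbitrary interval [a, b] via the linear change
# of variables x = (b - a)/2 * t + (b + a)/2.
def gauss_legendre_ab(f, a, b, n):
    t, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)      # mapped nodes
    return 0.5 * (b - a) * np.sum(w * f(x))    # Jacobian factor (b - a)/2

# ∫_0^1 e^x dx = e - 1
approx = gauss_legendre_ab(np.exp, 0.0, 1.0, 5)
print(abs(approx - (np.e - 1.0)) < 1e-12)  # True: 5 points already suffice
```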

Gauss-Chebyshev quadrature uses weight function $w(x) = 1/\sqrt{1 - x^2}$ on $[-1, 1]$. It's well-suited for integrands of the form $f(x)/\sqrt{1 - x^2}$, where $f$ is smooth. The nodes and weights have closed-form expressions, which is a nice computational advantage. Example: computing $\int_{-1}^{1} \frac{\cos(5x)}{\sqrt{1-x^2}}\,dx$.
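Since the nodes and weights are $x_i = \cos\bigl(\tfrac{(2i-1)\pi}{2n}\bigr)$ and $w_i = \pi/n$, the rule fits in a few lines. A sketch for the example integral, checked by comparing two resolutions rather than a closed-form answer:

```python
import numpy as np

# Gauss-Chebyshev (first kind): nodes and weights have closed forms,
# so no root-finding is needed.
def gauss_chebyshev(g, n):
    i = np.arange(1, n + 1)
    x = np.cos((2 * i - 1) * np.pi / (2 * n))  # nodes: Chebyshev roots
    w = np.pi / n                              # all weights are equal
    return w * np.sum(g(x))

# ∫_{-1}^{1} cos(5x) / sqrt(1 - x^2) dx: pass only the smooth factor
# g(x) = cos(5x); the 1/sqrt(1 - x^2) part is absorbed into the weight.
coarse = gauss_chebyshev(lambda x: np.cos(5 * x), 12)
fine = gauss_chebyshev(lambda x: np.cos(5 * x), 40)
print(abs(coarse - fine) < 1e-10)  # True: converged at modest n
```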

Gauss-Hermite quadrature uses $w(x) = e^{-x^2}$ on $(-\infty, \infty)$. This is natural for problems involving Gaussian-type integrands, such as quantum mechanics expectation values. Example: evaluating $\int_{-\infty}^{\infty} x^2 e^{-x^2}\,dx$.
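A sketch of this example using NumPy's `hermgauss`; the known value $\int_{-\infty}^{\infty} x^2 e^{-x^2}\,dx = \sqrt{\pi}/2$ serves as the check:

```python
import numpy as np

# Gauss-Hermite handles ∫_{-inf}^{inf} e^{-x^2} g(x) dx; pass only the
# smooth factor g(x) = x^2 -- the e^{-x^2} is the weight function.
x, w = np.polynomial.hermite.hermgauss(5)
approx = np.sum(w * x**2)
print(abs(approx - np.sqrt(np.pi) / 2.0) < 1e-12)  # True: exact for degree <= 9
```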

Advanced Techniques and Applications

Gauss-Laguerre quadrature uses $w(x) = e^{-x}$ on $[0, \infty)$, making it appropriate for integrands with exponential decay. Example: $\int_{0}^{\infty} x\, e^{-x}\,dx = 1$.
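The same example, sketched with NumPy's `laggauss`:

```python
import numpy as np

# Gauss-Laguerre handles ∫_0^inf e^{-x} g(x) dx; here g(x) = x.
x, w = np.polynomial.laguerre.laggauss(4)
approx = np.sum(w * x)   # ∫_0^inf x e^{-x} dx = 1
print(abs(approx - 1.0) < 1e-12)  # True: exact for degree <= 7
```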

Gauss-Jacobi quadrature generalizes several of the above. Its weight function is $w(x) = (1-x)^\alpha(1+x)^\beta$ on $[-1, 1]$, with parameters $\alpha > -1$ and $\beta > -1$. Legendre corresponds to $\alpha = \beta = 0$, and Chebyshev (first kind) to $\alpha = \beta = -1/2$. Adjusting $\alpha$ and $\beta$ lets you handle endpoint singularities of algebraic type directly within the quadrature framework.

Composite Gaussian quadrature subdivides a large interval into smaller subintervals and applies Gaussian quadrature on each. This is useful when:

  • The interval is large and the integrand varies significantly across it
  • The function has localized features (sharp peaks, near-singularities)

For example, to evaluate $\int_{0}^{10} \sin(x^2)\,dx$, you'd split $[0, 10]$ into subintervals and apply Gauss-Legendre on each, summing the results.
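A sketch of composite Gauss-Legendre for this example; the panel counts are illustrative choices, and agreement under refinement is the convergence check:

```python
import numpy as np

# Composite Gauss-Legendre: split [a, b] into m panels and apply an
# n-point rule on each one.
def composite_gauss(f, a, b, m, n):
    t, w = np.polynomial.legendre.leggauss(n)
    edges = np.linspace(a, b, m + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        x = 0.5 * (hi - lo) * t + 0.5 * (hi + lo)  # map nodes into panel
        total += 0.5 * (hi - lo) * np.sum(w * f(x))
    return total

# sin(x^2) oscillates faster as x grows, so a single panel struggles;
# refining the panels stabilizes the answer.
f = lambda x: np.sin(x**2)
coarse = composite_gauss(f, 0.0, 10.0, 100, 5)
fine = composite_gauss(f, 0.0, 10.0, 200, 5)
print(abs(coarse - fine) < 1e-8)  # True: estimates agree under refinement
```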

Adaptive Gaussian quadrature takes this further by automatically choosing where to refine. It estimates the error on each subinterval and subdivides only where the error is large. This is especially valuable for integrands like $\int_{0}^{1} x^{-1/2}\,dx$, where the singularity at $x = 0$ demands more nodes nearby but the rest of the interval is smooth.
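A minimal sketch of the adaptive idea: compare a one-panel estimate against two half-panel estimates and subdivide where they disagree. The tolerance splitting and depth cap are illustrative choices:

```python
import numpy as np

def gauss_panel(f, a, b, t, w):
    # n-point Gauss-Legendre on a single panel [a, b]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(w * f(x))

def adaptive_gauss(f, a, b, tol=1e-8, n=5, depth=0, max_depth=60):
    t, w = np.polynomial.legendre.leggauss(n)
    whole = gauss_panel(f, a, b, t, w)
    mid = 0.5 * (a + b)
    halves = gauss_panel(f, a, mid, t, w) + gauss_panel(f, mid, b, t, w)
    # Accept when the one-panel and two-panel estimates agree; otherwise
    # recurse, splitting the tolerance budget between the halves.
    if abs(whole - halves) < tol or depth >= max_depth:
        return halves
    return (adaptive_gauss(f, a, mid, tol / 2, n, depth + 1, max_depth)
            + adaptive_gauss(f, mid, b, tol / 2, n, depth + 1, max_depth))

# ∫_0^1 x^(-1/2) dx = 2; Gauss nodes are interior, so x = 0 is never hit,
# and the recursion concentrates panels near the singularity.
approx = adaptive_gauss(lambda x: 1.0 / np.sqrt(x), 0.0, 1.0, tol=1e-8)
print(abs(approx - 2.0))  # small: refinement clusters near x = 0
```

Note that subdivision happens only along the left edge of the interval, where the integrand is difficult; the smooth right-hand panels are accepted immediately.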