Linear Algebra and Differential Equations

Key Concepts of Fourier Series


Why This Matters

Fourier Series sit at the intersection of two major course themes: orthogonal decomposition from linear algebra and solving differential equations with boundary conditions. When you represent a periodic function as a sum of sines and cosines, you're projecting that function onto an orthonormal basis. It's the same concept you learned with vectors, just extended to infinite-dimensional function spaces. This technique transforms impossible-looking PDEs (heat equation, wave equation) into manageable systems of ODEs.

You're being tested on your ability to compute coefficients, recognize convergence behavior, and apply these series to solve boundary value problems. Don't just memorize the integral formulas. Each coefficient captures how much of a particular frequency "lives" in your function, and orthogonality is what makes the whole decomposition work cleanly.


The Foundation: Series Definition and Coefficients

Any reasonable periodic function can be written as a (possibly infinite) sum of sines and cosines. The coefficients tell you the "weight" of each frequency component.

Definition of Fourier Series

For a function with period $T$, the Fourier series takes the general form:

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left(a_n \cos\!\left(\frac{2\pi nx}{T}\right) + b_n \sin\!\left(\frac{2\pi nx}{T}\right)\right)$$

This expresses a periodic function as a superposition of harmonic oscillations. The term $\frac{a_0}{2}$ is the average value (the "DC component"), and every other term oscillates at an integer multiple (a harmonic) of the fundamental frequency $\frac{2\pi}{T}$.

Note: Many textbooks work on the interval $[-\pi, \pi]$ with $T = 2\pi$, which simplifies the formula to $f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}(a_n \cos(nx) + b_n \sin(nx))$. Make sure you know which convention your course uses.

Fourier Coefficients ($a_n$ and $b_n$)

The coefficient formulas extract each frequency's amplitude by integrating against the corresponding basis function:

$$a_n = \frac{2}{T} \int_{0}^{T} f(x) \cos\!\left(\frac{2\pi nx}{T}\right) dx, \qquad b_n = \frac{2}{T} \int_{0}^{T} f(x) \sin\!\left(\frac{2\pi nx}{T}\right) dx$$

  • $a_0$ deserves special attention: plugging $n = 0$ into the $a_n$ formula gives $a_0 = \frac{2}{T}\int_0^T f(x)\,dx$, which is twice the average value. That's why the series has $\frac{a_0}{2}$ out front.
  • Integration over one complete period ensures you capture the full behavior of the function. You can integrate over any interval of length $T$ (e.g., $[-T/2, T/2]$), not just $[0, T]$.
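
These formulas can be sanity-checked numerically. A minimal Python sketch (the square wave, the midpoint-rule integrator, and the name `fourier_coeffs` are illustrative choices, not from the text): a $\pm 1$ square wave of period $2\pi$ has the known coefficients $a_n = 0$ for all $n$ and $b_n = 4/(n\pi)$ for odd $n$.

```python
import math

def fourier_coeffs(f, T, n, samples=100_000):
    """Approximate a_n and b_n with a midpoint Riemann sum over one period [0, T]."""
    dx = T / samples
    a = b = 0.0
    for k in range(samples):
        x = (k + 0.5) * dx
        a += f(x) * math.cos(2 * math.pi * n * x / T) * dx
        b += f(x) * math.sin(2 * math.pi * n * x / T) * dx
    return 2 * a / T, 2 * b / T

# Square wave: +1 on the first half-period, -1 on the second (illustrative example).
square = lambda x: 1.0 if (x % (2 * math.pi)) < math.pi else -1.0

a1, b1 = fourier_coeffs(square, 2 * math.pi, 1)  # expect a_1 = 0, b_1 = 4/pi
```

Trying an even value of `n` should return coefficients near zero, since this square wave contains only odd harmonics.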

Euler's Formula and Complex Form

Euler's formula $e^{ix} = \cos(x) + i\sin(x)$ bridges trigonometric and exponential representations. Using it, you can rewrite the Fourier series in complex form:

$$f(x) = \sum_{n=-\infty}^{\infty} c_n \, e^{i \cdot 2\pi nx/T}$$

The complex coefficients are $c_n = \frac{1}{T}\int_0^T f(x)\,e^{-i \cdot 2\pi nx/T}\,dx$. Exponentials are often easier to differentiate and integrate, making this form preferred in advanced applications.

Compare: Real form vs. Complex form: both represent the same information, but the complex form uses a single coefficient $c_n$ instead of separate $a_n$ and $b_n$. The relationship is $c_n = \frac{1}{2}(a_n - ib_n)$ for $n > 0$ and $c_{-n} = \overline{c_n}$ when $f$ is real-valued. If a problem asks you to "simplify" a Fourier calculation, switching to complex exponentials often streamlines the algebra.
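
A quick numeric check of this relationship, again using a $\pm 1$ square wave of period $2\pi$ (an illustrative choice): since $a_1 = 0$ and $b_1 = 4/\pi$, the complex coefficient should come out to $c_1 = -2i/\pi$, with $c_{-1} = \overline{c_1}$.

```python
import math, cmath

T = 2 * math.pi
square = lambda x: 1.0 if (x % T) < math.pi else -1.0

def c(n, samples=100_000):
    """c_n = (1/T) * integral over one period of f(x) e^{-i 2*pi*n*x/T} dx (midpoint rule)."""
    dx = T / samples
    total = 0j
    for k in range(samples):
        x = (k + 0.5) * dx
        total += square(x) * cmath.exp(-2j * math.pi * n * x / T) * dx
    return total / T

# Expect c_1 = (a_1 - i*b_1)/2 = -2i/pi, and c_{-1} equal to its conjugate.
c1 = c(1)
```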


The Linear Algebra Connection: Orthogonality

Orthogonality of basis functions is what makes Fourier analysis work. Just as orthogonal vectors simplify projections in $\mathbb{R}^n$, orthogonal functions let you isolate each coefficient independently.

Orthogonality of Trigonometric Functions

The key orthogonality relations on $[0, T]$ (or any interval of length $T$) are:

  • $\int_{0}^{T} \sin\!\left(\frac{2\pi nx}{T}\right) \cos\!\left(\frac{2\pi mx}{T}\right) dx = 0$ for all integers $n, m$
  • $\int_{0}^{T} \cos\!\left(\frac{2\pi nx}{T}\right) \cos\!\left(\frac{2\pi mx}{T}\right) dx = 0$ when $n \neq m$
  • $\int_{0}^{T} \sin\!\left(\frac{2\pi nx}{T}\right) \sin\!\left(\frac{2\pi mx}{T}\right) dx = 0$ when $n \neq m$

This independence of frequency components means changing one coefficient doesn't affect others. Each term in the series is decoupled.

The projection formula for coefficients follows directly: multiply both sides of the Fourier series by a basis function and integrate. Orthogonality kills every term except the one you want, just like $\text{proj}_{\vec{v}} \vec{u} = \frac{\vec{u} \cdot \vec{v}}{\vec{v} \cdot \vec{v}}\,\vec{v}$ in $\mathbb{R}^n$.
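
The orthogonality relations are easy to verify numerically. A short sketch (the helper names `inner`, `sin_n`, `cos_n` are made up for illustration) approximates the inner product with a midpoint rule on $[0, 2\pi]$; cross terms vanish, while a basis function against itself gives $T/2 = \pi$.

```python
import math

def inner(f, g, T=2 * math.pi, samples=10_000):
    """Midpoint-rule approximation of <f, g> = integral_0^T f(x) g(x) dx."""
    dx = T / samples
    return sum(f((k + 0.5) * dx) * g((k + 0.5) * dx) for k in range(samples)) * dx

# Factories for the trigonometric basis functions.
sin_n = lambda n: (lambda x: math.sin(n * x))
cos_n = lambda n: (lambda x: math.cos(n * x))
```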

Parseval's Theorem

Parseval's theorem is an energy conservation statement:

$$\frac{1}{T} \int_{0}^{T} |f(x)|^2 \, dx = \frac{a_0^2}{4} + \frac{1}{2}\sum_{n=1}^{\infty} \left(a_n^2 + b_n^2\right)$$

The left side is the average of $|f|^2$ over one period (total "energy" in the time domain). The right side is the sum of energies in each frequency mode. This is useful for checking work: if your coefficients don't satisfy Parseval's identity, something went wrong in your calculation.
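
Here is a concrete instance, assuming the $\pm 1$ square wave of period $2\pi$ (an illustrative example, not from the text): the left side is $1$ because $|f| = 1$ everywhere, and only odd-harmonic sine terms with $b_n = 4/(n\pi)$ contribute on the right.

```python
import math

# Square wave (+1/-1, period 2*pi): a_n = 0 for all n, b_n = 4/(n*pi) for odd n.
# Time-domain side: (1/T) * integral of |f(x)|^2 dx = 1, since |f(x)| = 1 everywhere.
energy = 0.0
for n in range(1, 200_000, 2):  # odd harmonics only
    bn = 4 / (n * math.pi)
    energy += 0.5 * bn ** 2  # the (1/2)(a_n^2 + b_n^2) contribution
# energy should approach 1 as more harmonics are included
```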

Compare: Orthogonality in $\mathbb{R}^n$ vs. function spaces: in both cases, orthogonality lets you compute projections independently. The integral $\int f(x)g(x)\,dx$ plays the role of the dot product $\vec{u} \cdot \vec{v}$.


Exploiting Symmetry: Even and Odd Functions

Recognizing symmetry cuts your work in half. Even and odd functions have simplified Fourier representations that eliminate entire families of coefficients.

Even and Odd Functions in Fourier Series

  • Even functions ($f(-x) = f(x)$) have only cosine terms: all $b_n = 0$. Why? The product of an even function with $\sin(nx)$ (which is odd) is odd, and odd functions integrate to zero over symmetric intervals.
  • Odd functions ($f(-x) = -f(x)$) have only sine terms: all $a_n = 0$. The product of an odd function with $\cos(nx)$ (which is even) is odd, so again the integral vanishes.
  • Symmetry detection should be your first step before computing anything. It reduces computation and helps verify your final answer.
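
A numeric illustration of the first bullet (the test function $x^2$ and the helper name `bn` are assumptions for the example): the sine coefficients of an even function vanish because the integrand is odd over $[-\pi, \pi]$.

```python
import math

def bn(f, n, samples=50_000):
    """b_n for period 2*pi, integrated over the symmetric interval [-pi, pi]."""
    dx = 2 * math.pi / samples
    total = 0.0
    for k in range(samples):
        x = -math.pi + (k + 0.5) * dx
        total += f(x) * math.sin(n * x) * dx
    return total / math.pi

# x^2 is even and x^2 * sin(n x) is odd, so the integral over [-pi, pi] vanishes.
b3 = bn(lambda x: x * x, 3)
```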

Half-Range Expansions

When your function is defined only on $[0, L]$ (not a full period), you can extend it to create a periodic function in two ways:

  • Even extension (reflect across the y-axis) produces a cosine series
  • Odd extension (reflect with a sign flip) produces a sine series

Boundary value problems often dictate which extension to use. Dirichlet conditions (fixed values at endpoints, like $f(0) = f(L) = 0$) call for a sine series because $\sin(n\pi x/L)$ vanishes at both endpoints. Neumann conditions (zero derivative at endpoints) call for a cosine series because the derivative of $\cos(n\pi x/L)$ vanishes at both endpoints.

Compare: Cosine series vs. Sine series: both can represent the same function on $[0, L]$, but they extend it differently outside that interval. Choose based on boundary conditions: sine series vanish at endpoints, cosine series have zero slope at endpoints.
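
A small check of the sine-series case (with $L = 1$ and the test function $x(L - x)$ chosen for illustration): this function satisfies $f(0) = f(L) = 0$, and its half-range sine coefficients have the known closed form $B_n = 8L^2/(n^3\pi^3)$ for odd $n$ and $0$ for even $n$.

```python
import math

L = 1.0  # illustrative half-range length

def sine_coeff(f, n, samples=20_000):
    """Half-range sine coefficient B_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx."""
    dx = L / samples
    return (2 / L) * sum(
        f((k + 0.5) * dx) * math.sin(n * math.pi * (k + 0.5) * dx / L)
        for k in range(samples)
    ) * dx

# f(x) = x(L - x) vanishes at both endpoints, matching Dirichlet conditions;
# closed form: B_n = 8*L^2/(n^3 * pi^3) for odd n, 0 for even n.
f = lambda x: x * (L - x)
```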


Convergence Behavior and Limitations

Not all functions behave equally well under Fourier expansion. Understanding convergence tells you when to trust your series and where to expect trouble.

Convergence of Fourier Series

  • Pointwise convergence: at points where $f$ is continuous, the partial sums converge to $f(x)$ as you add more terms
  • At discontinuities, the series converges to the average of the left and right limits: $\frac{1}{2}[f(x^-) + f(x^+)]$
  • Dirichlet conditions guarantee convergence. A function satisfies these if it's piecewise continuous, has finitely many discontinuities per period, and has bounded variation. Most functions you'll encounter in this course qualify.

Gibbs Phenomenon

Near a jump discontinuity, the partial sums of the Fourier series overshoot by about 9% of the jump size. This overshoot persists no matter how many terms you include. As $n \to \infty$, the oscillations get narrower and concentrate closer to the discontinuity, but the peak overshoot never goes away.

If an exam asks why a Fourier approximation looks "wrong" near a jump, Gibbs phenomenon is your answer. It's a fundamental limitation of representing discontinuities with continuous sinusoidal functions, not a computational error.
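
The overshoot is easy to see in a computation. A sketch using the $\pm 1$ square wave (the truncation orders and grid spacings are arbitrary illustrative choices): the peak just right of the jump at $x = 0$ sits near $1.18$ whether you keep 19 or 199 harmonics, even though the true function never exceeds $1$.

```python
import math

def partial_sum(x, N):
    """Square-wave Fourier partial sum: sum of (4/(n*pi)) sin(n x) over odd n <= N."""
    return sum(4 / (n * math.pi) * math.sin(n * x) for n in range(1, N + 1, 2))

# The function jumps from -1 to +1 at x = 0 (jump size 2). The partial-sum peak
# overshoots +1 by roughly 9% of that jump, and it does not shrink as N grows.
peak_small = max(partial_sum(k * 0.001, 19) for k in range(1, 400))
peak_large = max(partial_sum(k * 0.0001, 199) for k in range(1, 2000))
```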

Compare: Continuous vs. discontinuous functions: smooth functions have rapidly decaying coefficients ($a_n, b_n \sim 1/n^2$ or faster), while functions with jump discontinuities have slowly decaying coefficients ($\sim 1/n$). This is why discontinuities require many more terms to approximate well.


Applications to Differential Equations

This is where Fourier Series earn their keep. The technique transforms PDEs with periodic or boundary conditions into algebraic problems.

Solving Differential Equations with Fourier Series

The standard approach combines separation of variables with Fourier Series. Here's how it works for a PDE like the heat equation $u_t = k\,u_{xx}$:

  1. Separate variables: assume $u(x,t) = X(x)\,T(t)$ and substitute into the PDE. This splits the PDE into two ODEs, one in $x$ and one in $t$.
  2. Solve the spatial ODE with the given boundary conditions. This produces eigenfunctions (typically sines or cosines) and eigenvalues.
  3. Write the general solution as a superposition: $u(x,t) = \sum_{n=1}^{\infty} B_n \sin\!\left(\frac{n\pi x}{L}\right) e^{-k(n\pi/L)^2 t}$ (for Dirichlet boundary conditions).
  4. Apply the initial condition $u(x,0) = f(x)$. This gives $f(x) = \sum B_n \sin(n\pi x/L)$, which is exactly a Fourier sine series. Compute the $B_n$ using the coefficient formula.

The superposition principle is what makes this work: because the PDE is linear, you can solve for each Fourier mode separately and sum the results.
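
The four steps can be traced end to end in code. A minimal Python sketch (the values $k = 0.5$, $L = 1$, the initial condition, and the helper names are all illustrative assumptions): with $u(x,0) = \sin(\pi x/L)$, only the $n = 1$ mode has a nonzero coefficient, so the series should reproduce the exact solution $\sin(\pi x/L)\,e^{-k(\pi/L)^2 t}$.

```python
import math

k, L = 0.5, 1.0  # illustrative diffusivity and rod length

def sine_coeff(f, n, samples=5_000):
    """Step 4: B_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx (midpoint rule)."""
    dx = L / samples
    return (2 / L) * sum(
        f((j + 0.5) * dx) * math.sin(n * math.pi * (j + 0.5) * dx / L)
        for j in range(samples)
    ) * dx

def u(x, t, f, terms=20):
    """Step 3: superpose decaying sine modes, each satisfying u(0,t) = u(L,t) = 0."""
    return sum(
        sine_coeff(f, n)
        * math.sin(n * math.pi * x / L)
        * math.exp(-k * (n * math.pi / L) ** 2 * t)
        for n in range(1, terms + 1)
    )

# Initial condition: a single sine mode, so the series collapses to one term.
f = lambda x: math.sin(math.pi * x / L)
```

Comparing `u(x, t, f)` against the closed-form single-mode solution at any point checks all four steps at once.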

Fourier Transforms and Their Relationship

  • Non-periodic generalization: Fourier transforms extend the series concept to functions on $(-\infty, \infty)$ by letting the period $T \to \infty$
  • Continuous spectrum replaces discrete frequencies; instead of coefficients $c_n$, you get a continuous function $\hat{f}(\omega)$

Compare: Fourier Series vs. Fourier Transform: series give you discrete frequencies (harmonics of the fundamental), while transforms give a continuous frequency spectrum. Use series for periodic problems or finite-domain boundary value problems; use transforms for non-periodic or infinite-domain problems.


Quick Reference Table

Concept | Key Details
--- | ---
Computing coefficients | $a_n$, $b_n$ formulas; complex $c_n$; remember the $\frac{a_0}{2}$ in the series
Orthogonality principle | Inner products of $\sin(nx)$, $\cos(mx)$ vanish; enables independent coefficient computation
Symmetry exploitation | Even $\to$ cosine series only; odd $\to$ sine series only; half-range expansions
Convergence behavior | Dirichlet conditions guarantee convergence; averages left/right limits at jumps
Gibbs phenomenon | ~9% overshoot at discontinuities; doesn't vanish with more terms
Energy relationships | Parseval's theorem: total energy = sum of modal energies
PDE applications | Separation of variables + Fourier coefficients from initial/boundary conditions
Series vs. Transform | Periodic/finite domain $\to$ series; non-periodic/infinite domain $\to$ transform

Self-Check Questions

  1. Why does orthogonality of sine and cosine functions allow you to compute each Fourier coefficient independently? How is this analogous to projecting onto orthogonal vectors?

  2. Given a function with a jump discontinuity, what value does the Fourier series converge to at that point, and what phenomenon affects the approximation nearby?

  3. When would you use a half-range cosine expansion versus a half-range sine expansion? What boundary conditions does each naturally satisfy?

  4. If $f(x)$ is an odd function, which coefficients are automatically zero? Explain why using the properties of odd and even functions under integration.

  5. A function $f(x)$ on $[0, \pi]$ satisfies $f(0) = f(\pi) = 0$. You need to solve the heat equation with these boundary conditions. Would you use a sine series or cosine series expansion? Justify your choice and outline how Fourier's method transforms the PDE into solvable components.