Fourier Series sit at the intersection of two major course themes: orthogonal decomposition from linear algebra and solving differential equations with boundary conditions. When you represent a periodic function as a sum of sines and cosines, you're essentially projecting that function onto an orthonormal basis—the same concept you learned with vectors, just extended to infinite-dimensional function spaces. This technique transforms impossible-looking PDEs (heat equation, wave equation) into manageable systems of ODEs.
You're being tested on your ability to compute coefficients, recognize convergence behavior, and apply these series to solve boundary value problems. Don't just memorize the integral formulas—understand that each coefficient captures how much of a particular frequency "lives" in your function, and that orthogonality is what makes the whole decomposition work cleanly.
The Foundation: Series Definition and Coefficients
The core idea is simple: any reasonable periodic function can be written as a (possibly infinite) sum of sines and cosines. The coefficients tell you the "weight" of each frequency component.
Definition of Fourier Series
General form: f(x) = a_0 + ∑_{n=1}^∞ [a_n cos(2πnx/T) + b_n sin(2πnx/T)] expresses a T-periodic function as a superposition of harmonic oscillations (for period T = 2π this reduces to the familiar cos(nx) and sin(nx) terms)
Frequency decomposition allows you to analyze which frequencies are present and how strongly each contributes to the overall shape
Period T determines the fundamental frequency; all other terms are integer multiples (harmonics) of this base frequency
Fourier Coefficients (an and bn)
Coefficient formulas: a_n = (2/T) ∫₀ᵀ f(x) cos(2πnx/T) dx and b_n = (2/T) ∫₀ᵀ f(x) sin(2πnx/T) dx extract each frequency's amplitude
a_0 = (1/T) ∫₀ᵀ f(x) dx represents the average value of the function over one period (the DC component in engineering language)
Integration over one complete period ensures you capture the full behavior of the function without bias from starting point
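These formulas are easy to sanity-check numerically. The sketch below (illustrative, not part of the original notes; it uses NumPy and a uniform Riemann sum over one period) recovers the known coefficients of a ±1 square wave, whose only nonzero coefficients are b_n = 4/(πn) for odd n:

```python
import numpy as np

def fourier_coefficients(f, T, n_max, samples=20000):
    """Approximate a_0, a_n, b_n for a T-periodic function f by a
    uniform Riemann sum over one period:
      a_0 = (1/T) * integral_0^T f(x) dx          (the average value)
      a_n = (2/T) * integral_0^T f(x) cos(2*pi*n*x/T) dx, similarly b_n
    """
    x = np.linspace(0.0, T, samples, endpoint=False)
    fx = f(x)
    a0 = fx.mean()                    # (1/T) * integral via uniform samples
    a = np.empty(n_max)
    b = np.empty(n_max)
    for n in range(1, n_max + 1):
        a[n - 1] = 2.0 * np.mean(fx * np.cos(2 * np.pi * n * x / T))
        b[n - 1] = 2.0 * np.mean(fx * np.sin(2 * np.pi * n * x / T))
    return a0, a, b

# Square wave of period 2*pi: +1 on (0, pi), -1 on (pi, 2*pi).
square = lambda x: np.where(np.mod(x, 2 * np.pi) < np.pi, 1.0, -1.0)
a0, a, b = fourier_coefficients(square, 2 * np.pi, 5)
# Known result: a_n = 0 for all n, b_n = 4/(pi*n) for odd n, 0 for even n.
```

The even-index b's vanish and the a's vanish because of the wave's symmetry, which is exactly the shortcut discussed in the symmetry section below.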
Euler's Formula and Complex Form
Euler's formula: e^{ix} = cos(x) + i sin(x) bridges trigonometric and exponential representations
Complex form: f(x) = ∑_{n=−∞}^∞ c_n e^{2πinx/T} combines sine and cosine into a single elegant expression with complex coefficients c_n = (1/T) ∫₀ᵀ f(x) e^{−2πinx/T} dx
Computational efficiency—exponentials are often easier to differentiate and integrate, making this form preferred in advanced applications
Compare: Real form vs. Complex form—both represent the same information, but the complex form uses a single coefficient cn instead of separate an and bn. If an FRQ asks you to "simplify" a Fourier calculation, switching to complex exponentials often streamlines the algebra.
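A quick numerical illustration of the complex form (a sketch; the helper name `c_n` and the test signal are my own choices): the integral for c_n picks out each exponential's weight, and for a real signal c_{−n} is the complex conjugate of c_n.

```python
import numpy as np

def c_n(f, T, n, samples=4096):
    """Complex Fourier coefficient c_n = (1/T) * integral_0^T f(x) e^{-2*pi*i*n*x/T} dx,
    approximated by a uniform Riemann sum over one period."""
    x = np.linspace(0.0, T, samples, endpoint=False)
    return np.mean(f(x) * np.exp(-2j * np.pi * n * x / T))

# A two-frequency test signal with period 2*pi.
f = lambda x: np.cos(x) + 2.0 * np.sin(3.0 * x)
# cos(x) = (e^{ix} + e^{-ix})/2       -> c_1 = c_{-1} = 1/2
# 2 sin(3x) = -i e^{3ix} + i e^{-3ix} -> c_3 = -i, c_{-3} = i
```

Because each real harmonic splits into a conjugate pair of exponentials, a single two-sided sequence c_n carries the same information as the pair (a_n, b_n).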
The Linear Algebra Connection: Orthogonality
Orthogonality of basis functions is what makes Fourier analysis work. Just as orthogonal vectors simplify projections in Rn, orthogonal functions let you isolate each coefficient independently.
Orthogonality of Trigonometric Functions
Inner product equals zero: ∫₀ᵀ sin(2πnx/T) cos(2πmx/T) dx = 0 for all integers n, m; similarly, products of two sines (or two cosines) with n ≠ m integrate to zero over a period
Independence of frequency components means changing one coefficient doesn't affect others—each term in the series is "decoupled"
Projection formula for coefficients follows directly: multiply by the basis function and integrate, just like proj_v(u) = (u·v / |v|²) v
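Orthogonality can be verified directly by numerical integration (a sketch; the frequencies 3 and 5 are arbitrary example choices):

```python
import numpy as np

# Uniform samples over one period; mean(g*h) * 2*pi approximates the
# inner product <g, h> = integral_0^{2*pi} g(x) h(x) dx.
x = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
ip = lambda g, h: np.mean(g * h) * 2 * np.pi

cross = ip(np.sin(3 * x), np.cos(5 * x))      # mixed type: always zero
same_diff = ip(np.sin(3 * x), np.sin(5 * x))  # same type, n != m: zero
same_same = ip(np.sin(3 * x), np.sin(3 * x))  # n = m: equals pi, not zero
```

The nonzero self-inner-product (π here) is exactly the normalization constant that appears in the coefficient formulas, just as |v|² appears in the vector projection formula.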
Parseval's Theorem
Energy conservation: (1/T) ∫₀ᵀ |f(x)|² dx = a_0² + (1/2) ∑_{n=1}^∞ (a_n² + b_n²) shows total energy equals the sum of energies in each mode (with the conventions above, where a_0 is the average value)
Time-frequency equivalence connects the integral of |f|² (time domain) to the sum of squared coefficients (frequency domain)
Useful for checking work—if your coefficients don't satisfy Parseval's identity, something went wrong in your calculation
Compare: Orthogonality in Rn vs. function spaces—in both cases, orthogonality lets you compute projections independently. The integral ∫f(x)g(x)dx plays the role of the dot product u⋅v.
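Parseval's identity can be checked on the ±1 square wave, whose only nonzero coefficients are b_n = 4/(πn) for odd n (a numerical sketch under the conventions used above, where a_0 is the average value):

```python
import numpy as np

# Square wave (+1/-1, period 2*pi): a_0 = 0, a_n = 0, b_n = 4/(pi*n) for odd n.
n = np.arange(1, 100000, 2)               # odd harmonics
b = 4.0 / (np.pi * n)

# Frequency side: a_0^2 + (1/2) * sum(a_n^2 + b_n^2), with a_0 = a_n = 0 here.
energy_freq = 0.5 * np.sum(b ** 2)

# Time side: (1/T) * integral of |f|^2 over one period = 1, since |f| = 1.
energy_time = 1.0
```

The two sides agree, which is the "useful for checking work" point above: a mismatch here would signal a mistake in the computed coefficients.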
Exploiting Symmetry: Even and Odd Functions
Recognizing symmetry cuts your work in half. Even and odd functions have simplified Fourier representations that eliminate entire families of coefficients.
Even and Odd Functions in Fourier Series
Even functions (f(−x) = f(x)) have only cosine terms: all b_n = 0 because sine is odd, so f(x)sin(nx) is odd and its integral over a symmetric period vanishes
Odd functions (f(−x) = −f(x)) have only sine terms: all a_n = 0 because cosine is even, so the product f(x)cos(nx) is odd and integrates to zero
Symmetry detection should be your first step; it reduces computation and helps verify your final answer
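For instance, the even function |x| on [−π, π] (a periodic triangle wave) should produce vanishing b_n, and its known cosine coefficients are a_n = −4/(πn²) for odd n (a numerical sketch; the sample count is an arbitrary choice):

```python
import numpy as np

# One full period of the even triangle wave f(x) = |x| on [-pi, pi).
x = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
f = np.abs(x)                             # even: f(-x) = f(x)

# (2/T) * integral over one full period, via uniform samples (T = 2*pi).
an = lambda n: 2.0 * np.mean(f * np.cos(n * x))
bn = lambda n: 2.0 * np.mean(f * np.sin(n * x))
# Expect: every b_n ~ 0 (odd integrand), a_1 ~ -4/pi, average a_0 = pi/2.
```

Spotting the symmetry before integrating means half the coefficient integrals never need to be computed at all.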
Half-Range Expansions
Functions on [0,L] only can be extended to a full period by choosing an even extension (cosine series) or odd extension (sine series)
Boundary value problems often dictate which extension to use—Dirichlet conditions (fixed endpoints) suggest sine series; Neumann conditions (zero derivative) suggest cosine series
Practical technique for heat and wave equations where the physical domain is a finite interval, not a full period
Compare: Cosine series vs. Sine series—both can represent the same function on [0,L], but they extend it differently outside that interval. Choose based on boundary conditions: sine series vanish at endpoints, cosine series have zero slope at endpoints.
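A sketch of a half-range sine expansion: f(x) = x(π − x) on [0, π] already vanishes at both endpoints, so the odd extension is the natural one, and the classical result B_n = 8/(πn³) for odd n (zero for even n) serves as the reference value here:

```python
import numpy as np

L = np.pi
x = np.linspace(0.0, L, 20000, endpoint=False)
f = x * (L - x)                           # f(0) = f(L) = 0: fits Dirichlet conditions

# Half-range sine coefficients: B_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx,
# approximated by a uniform Riemann sum.
Bn = lambda n: 2.0 * np.mean(f * np.sin(n * np.pi * x / L))
# Known closed form: B_n = 8/(pi*n^3) for odd n, 0 for even n.
```

The fast 1/n³ decay reflects how smooth the odd extension is; had we forced a cosine (even) extension onto this boundary-value setup, the series would still converge but would not satisfy the endpoint conditions term by term.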
Convergence Behavior and Limitations
Not all functions behave equally well under Fourier expansion. Understanding convergence tells you when to trust your series and where to expect trouble.
Convergence of Fourier Series
Pointwise convergence: at points where f is continuous (and piecewise smooth), the partial sums converge to f(x) as you add more terms
At discontinuities, the series converges to the average of the left and right limits: (1/2)[f(x⁻) + f(x⁺)]
Gibbs phenomenon: an overshoot of roughly 9% of the jump height near a discontinuity persists no matter how many terms you include; this is a fundamental limitation, not a computational error
Oscillations concentrate near the discontinuity but don't spread; the "ringing" gets narrower as n→∞ but never disappears
Exam relevance—if asked why a Fourier approximation looks "wrong" near a jump, Gibbs phenomenon is your answer
Compare: Continuous vs. discontinuous functions: smooth functions have rapidly decaying coefficients (a_n, b_n ∼ 1/n² or faster), while discontinuous functions decay slowly (∼ 1/n). This explains why discontinuities require more terms to approximate well.
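The Gibbs overshoot is easy to see numerically: the peak of the partial sums of the ±1 square wave stays near 1.179 (about 9% of the jump above the true value) no matter how many terms are added (a sketch; grid resolution and term counts are arbitrary choices):

```python
import numpy as np

def square_partial_sum(x, N):
    """Partial Fourier sum of the +-1 square wave: (4/pi) * sum of sin(n*x)/n
    over odd n up to N."""
    s = np.zeros_like(x)
    for n in range(1, N + 1, 2):
        s += np.sin(n * x) / n
    return 4.0 / np.pi * s

# Sample densely on (0, pi) so the narrow overshoot spike is resolved.
x = np.linspace(0.001, np.pi - 0.001, 200000)
overshoot_50 = square_partial_sum(x, 51).max()
overshoot_500 = square_partial_sum(x, 501).max()
# Both peaks sit near 1.179: adding terms narrows the spike toward the
# discontinuity but does not shrink its height.
```

Plotting both partial sums would show the ringing compressing toward x = 0 and x = π while the peak height stays fixed, which is exactly the exam-relevant point above.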
Applications to Differential Equations
This is where Fourier Series earn their keep. The technique transforms PDEs with periodic or boundary conditions into algebraic problems.
Solving Differential Equations with Fourier Series
Separation of variables combined with Fourier Series converts PDEs (heat, wave, Laplace) into infinite systems of ODEs, each solvable independently
Superposition principle—because the equations are linear, you can solve for each Fourier mode separately and sum the results
Initial/boundary conditions determine your coefficients; the Fourier expansion of the initial condition gives you the weights for each mode
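The steps above can be sketched for the heat equation u_t = k·u_xx on [0, π] with u(0, t) = u(π, t) = 0: each sine mode of the initial condition evolves independently, picking up a decay factor e^{−kn²t} (illustrative code; the two-mode initial condition is a hand-picked example):

```python
import numpy as np

def heat_solution(x, t, B, k=1.0):
    """u(x, t) = sum_n B_n * exp(-k*n^2*t) * sin(n*x): each sine mode of the
    initial condition decays on its own, at rate k*n^2 (mode decoupling)."""
    u = np.zeros_like(x)
    for n, b in enumerate(B, start=1):
        u += b * np.exp(-k * n * n * t) * np.sin(n * x)
    return u

x = np.linspace(0.0, np.pi, 200)
B = [1.0, 0.0, 0.5]                       # u(x, 0) = sin(x) + 0.5*sin(3x)
u0 = heat_solution(x, 0.0, B)             # recovers the initial condition
u1 = heat_solution(x, 1.0, B)             # n = 3 mode decays 9x faster in the exponent
```

Notice there is no PDE solver anywhere: once the initial condition is expanded in sine modes, the PDE reduces to one trivial ODE per mode, and superposition reassembles the answer.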
Fourier Transforms and Their Relationship
Non-periodic generalization—Fourier transforms extend the series concept to functions on (−∞,∞) by letting the period T→∞
Continuous spectrum replaces discrete frequencies; instead of coefficients c_n, you get a continuous function f̂(ω)
Conceptual bridge—Fourier Series are the "discrete" version for periodic functions; transforms handle the general case
Compare: Fourier Series vs. Fourier Transform—series give you discrete frequencies (harmonics of the fundamental), while transforms give a continuous frequency spectrum. Use series for periodic problems, transforms for non-periodic or infinite-domain problems.
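In computational practice the discrete Fourier transform bridges both views: applying NumPy's FFT to uniform samples of a periodic signal recovers its complex series coefficients c_n (a sketch; the trigonometric-polynomial signal is a hand-picked example):

```python
import numpy as np

# For a periodic signal sampled N times over one period, fft(samples)/N
# approximates the complex Fourier coefficients c_n (n = 0, 1, ..., with
# negative frequencies wrapped to the top of the array).
N = 256
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
signal = 3.0 + np.cos(2 * x) + np.sin(5 * x)
c = np.fft.fft(signal) / N
# Expect: c[0] = 3 (average), c[2] = 1/2 (from cos 2x), c[5] = -i/2 (from sin 5x),
# and the conjugates at the wrapped negative frequencies c[N-2], c[N-5].
```

For a band-limited signal like this one the recovery is exact (up to rounding), which is why the FFT is the standard tool whenever series or transform coefficients are needed numerically.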
Quick Reference Table
| Concept | Best Examples |
| --- | --- |
| Computing coefficients | a_n, b_n formulas, complex c_n |
| Orthogonality principle | Inner products of sin(nx), cos(mx); Parseval's theorem |
| Symmetry exploitation | Even/odd functions, half-range expansions |
| Convergence behavior | Dirichlet conditions, Gibbs phenomenon |
| Energy relationships | Parseval's theorem |
| PDE applications | Heat equation, wave equation, boundary value problems |
| Series vs. Transform | Periodic functions vs. non-periodic/infinite domain |
Self-Check Questions
Why does orthogonality of sine and cosine functions allow you to compute each Fourier coefficient independently? How is this analogous to projecting onto orthogonal vectors?
Given a function with a jump discontinuity, what value does the Fourier Series converge to at that point, and what phenomenon affects the approximation nearby?
Compare and contrast: When would you use a half-range cosine expansion versus a half-range sine expansion? What boundary conditions does each satisfy?
If f(x) is an odd function, which coefficients are automatically zero? Explain why using the properties of odd and even functions under integration.
(FRQ-style) A function f(x) on [0,π] satisfies f(0)=f(π)=0. You need to solve the heat equation with these boundary conditions. Would you use a sine series or cosine series expansion? Justify your choice and explain how Fourier's method transforms the PDE into solvable components.