Fourier Series sit at the intersection of two major course themes: orthogonal decomposition from linear algebra and solving differential equations with boundary conditions. When you represent a periodic function as a sum of sines and cosines, you're projecting that function onto an orthonormal basis. It's the same concept you learned with vectors, just extended to infinite-dimensional function spaces. This technique transforms impossible-looking PDEs (heat equation, wave equation) into manageable systems of ODEs.
You're being tested on your ability to compute coefficients, recognize convergence behavior, and apply these series to solve boundary value problems. Don't just memorize the integral formulas. Each coefficient captures how much of a particular frequency "lives" in your function, and orthogonality is what makes the whole decomposition work cleanly.
Any reasonable periodic function can be written as a (possibly infinite) sum of sines and cosines. The coefficients tell you the "weight" of each frequency component.
For a function $f(x)$ with period $2L$, the Fourier series takes the general form:

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right)$$
This expresses a periodic function as a superposition of harmonic oscillations. The $\frac{a_0}{2}$ term is the average value (the "DC component"), and every other term oscillates at an integer multiple (a harmonic) of the fundamental frequency $\frac{\pi}{L}$.
Note: Many textbooks work on the interval $[-\pi, \pi]$ with $L = \pi$, which simplifies the formula to $f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right)$. Make sure you know which convention your course uses.
The coefficient formulas extract each frequency's amplitude by integrating against the corresponding basis function:

$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx \qquad b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx$$

(The $a_n$ formula with $n = 0$ gives $a_0$, which is why the series starts with $\frac{a_0}{2}$.)
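As a quick numerical sanity check, the coefficient integrals can be evaluated directly. This is only a sketch: the test function $f(x) = x$ and the midpoint-rule integrator are illustrative choices, not from the text; for $f(x) = x$ the exact coefficients are $a_n = 0$ and $b_n = 2(-1)^{n+1}/n$.

```python
import math

# Midpoint-rule quadrature; error is O(h^2), plenty for this check.
def integrate(g, a, b, steps=20000):
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# Coefficient formulas on [-pi, pi] (the L = pi convention):
def fourier_a(f, n):
    return integrate(lambda t: f(t) * math.cos(n * t), -math.pi, math.pi) / math.pi

def fourier_b(f, n):
    return integrate(lambda t: f(t) * math.sin(n * t), -math.pi, math.pi) / math.pi

# Example function f(x) = x, whose exact series is known.
f = lambda t: t
print(abs(fourier_a(f, 1)) < 1e-6)        # True: cosine terms vanish (f is odd)
print(abs(fourier_b(f, 1) - 2.0) < 1e-6)  # True: b_1 = 2
print(abs(fourier_b(f, 2) + 1.0) < 1e-6)  # True: b_2 = -1
```

Matching a numerically computed coefficient against a known closed form is a good habit before trusting a long hand calculation.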
Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$ bridges trigonometric and exponential representations. Using it, you can rewrite the Fourier series in complex form:

$$f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{i n\pi x / L}$$
The complex coefficients are $c_n = \frac{1}{2L}\int_{-L}^{L} f(x)\, e^{-i n\pi x / L}\,dx$. Exponentials are often easier to differentiate and integrate, making this form preferred in advanced applications.
Compare: Real form vs. Complex form: both represent the same information, but the complex form uses a single coefficient $c_n$ instead of separate $a_n$ and $b_n$. The relationship is $c_n = \frac{1}{2}(a_n - i b_n)$ for $n \geq 1$, and $c_{-n} = \overline{c_n}$ when $f$ is real-valued. If a problem asks you to "simplify" a Fourier calculation, switching to complex exponentials often streamlines the algebra.
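One way to see the equivalence concretely: build the real-form partial sum from $a_n = 2\,\mathrm{Re}\,c_n$, $b_n = -2\,\mathrm{Im}\,c_n$ and compare it with the complex-form partial sum. A sketch on $[-\pi, \pi]$; the test function $f(x) = x + x^2$ and the midpoint integrator are arbitrary illustrative choices.

```python
import math

def integrate(g, a, b, steps=20000):
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

def c_n(f, n):
    # c_n = (1/2pi) * integral of f(t) e^{-int} dt, split into parts
    re = integrate(lambda t: f(t) * math.cos(n * t), -math.pi, math.pi)
    im = -integrate(lambda t: f(t) * math.sin(n * t), -math.pi, math.pi)
    return complex(re, im) / (2 * math.pi)

f = lambda t: t + t ** 2   # arbitrary test function (neither even nor odd)
N, x = 8, 0.7
cs = {n: c_n(f, n) for n in range(N + 1)}

# Real-form partial sum, using a_n = 2 Re c_n, b_n = -2 Im c_n, a_0/2 = c_0:
real_sum = cs[0].real + sum(2 * cs[n].real * math.cos(n * x)
                            - 2 * cs[n].imag * math.sin(n * x)
                            for n in range(1, N + 1))

# Complex-form partial sum, using c_{-n} = conj(c_n) for real-valued f:
cplx_sum = cs[0] + sum(cs[n] * complex(math.cos(n * x), math.sin(n * x))
                       + cs[n].conjugate() * complex(math.cos(n * x), -math.sin(n * x))
                       for n in range(1, N + 1))

print(abs(real_sum - cplx_sum.real) < 1e-9)  # True: same partial sum
print(abs(cplx_sum.imag) < 1e-9)             # True: imaginary parts cancel
```

The pairing $c_n e^{inx} + \overline{c_n} e^{-inx} = 2\,\mathrm{Re}(c_n e^{inx})$ is exactly how the $\pm n$ terms of the complex series recombine into one real $\cos$/$\sin$ term.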
Orthogonality of basis functions is what makes Fourier analysis work. Just as orthogonal vectors simplify projections in $\mathbb{R}^n$, orthogonal functions let you isolate each coefficient independently.
The key orthogonality relations on $[-\pi, \pi]$ (or any interval of length $2\pi$) are:

$$\int_{-\pi}^{\pi} \sin(mx)\sin(nx)\,dx = \pi\,\delta_{mn}, \qquad \int_{-\pi}^{\pi} \cos(mx)\cos(nx)\,dx = \pi\,\delta_{mn} \quad (m, n \geq 1),$$
$$\int_{-\pi}^{\pi} \sin(mx)\cos(nx)\,dx = 0 \quad \text{for all } m, n.$$
This independence of frequency components means changing one coefficient doesn't affect others. Each term in the series is decoupled.
The projection formula for coefficients follows directly: multiply both sides of the Fourier series by a basis function and integrate. Orthogonality kills every term except the one you want, just like in $\mathbb{R}^n$.
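A minimal numerical check of these relations (a sketch; the specific mode numbers 2, 3, 5 and the midpoint quadrature are arbitrary choices):

```python
import math

def integrate(g, a, b, steps=20000):
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# Distinct frequencies integrate to zero; matching sines give pi:
cross_sin = integrate(lambda t: math.sin(2 * t) * math.sin(3 * t), -math.pi, math.pi)
sin_cos   = integrate(lambda t: math.sin(2 * t) * math.cos(5 * t), -math.pi, math.pi)
same_sin  = integrate(lambda t: math.sin(2 * t) ** 2, -math.pi, math.pi)

print(abs(cross_sin) < 1e-6)           # True: sin(2x) ⟂ sin(3x)
print(abs(sin_cos) < 1e-6)             # True: sines ⟂ cosines
print(abs(same_sin - math.pi) < 1e-6)  # True: ||sin(2x)||^2 = pi
```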
Parseval's theorem is an energy conservation statement:

$$\frac{1}{2L}\int_{-L}^{L} |f(x)|^2\,dx = \frac{a_0^2}{4} + \frac{1}{2}\sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right)$$
The left side is the average of $|f(x)|^2$ over one period (total "energy" in the time domain). The right side is the sum of energies in each frequency mode. This is useful for checking work: if your coefficients don't satisfy Parseval's identity, something went wrong in your calculation.
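For instance, Parseval's identity can be checked for the example $f(x) = x$ on $[-\pi, \pi]$ (an illustrative choice, not from the text), whose known coefficients are $a_n = 0$ and $b_n = 2(-1)^{n+1}/n$:

```python
import math

# Left side: average of f(x)^2 = x^2 over one period,
# (1/2pi) * integral of x^2 over [-pi, pi] = pi^2 / 3.
time_side = math.pi ** 2 / 3

# Right side: a_0^2/4 + (1/2) sum (a_n^2 + b_n^2). Here a_n = 0 and
# b_n = 2(-1)^(n+1)/n, so each mode contributes (1/2)(2/n)^2 = 2/n^2.
freq_side = sum(2 / n ** 2 for n in range(1, 200001))

print(abs(time_side - freq_side) < 1e-4)  # True: tail beyond n = 200000 is ~1e-5
```

As a bonus, this recovers the classic identity $\sum 1/n^2 = \pi^2/6$ from a Fourier series.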
Compare: Orthogonality in $\mathbb{R}^n$ vs. function spaces: in both cases, orthogonality lets you compute projections independently. The inner product $\langle f, g \rangle = \int f(x)\,g(x)\,dx$ plays the role of the dot product $\mathbf{u} \cdot \mathbf{v}$.
Recognizing symmetry cuts your work in half. Even and odd functions have simplified Fourier representations that eliminate entire families of coefficients.
When your function is defined only on $[0, L]$ (not a full period), you can extend it to create a periodic function in two ways:

- Even extension: reflect across the vertical axis so $f(-x) = f(x)$, giving a half-range cosine series (all $b_n = 0$).
- Odd extension: reflect through the origin so $f(-x) = -f(x)$, giving a half-range sine series (all $a_n = 0$).
Boundary value problems often dictate which extension to use. Dirichlet conditions (fixed values at endpoints, like $u(0, t) = u(L, t) = 0$) call for a sine series because $\sin\frac{n\pi x}{L}$ vanishes at both endpoints. Neumann conditions (zero derivative at endpoints) call for a cosine series because the derivative of $\cos\frac{n\pi x}{L}$ vanishes at both endpoints.
Compare: Cosine series vs. Sine series: both can represent the same function on $[0, L]$, but they extend it differently outside that interval. Choose based on boundary conditions: sine series vanish at the endpoints, cosine series have zero slope at the endpoints.
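To see the two extensions concretely, here is a sketch using $f(x) = x$ on $[0, \pi]$ (an illustrative choice). Its half-range coefficients are standard closed-form results: sine series $b_n = 2(-1)^{n+1}/n$ (odd extension), cosine series $a_0 = \pi$, $a_n = -4/(\pi n^2)$ for odd $n$ and $0$ for even $n$ (even extension).

```python
import math

N = 2000  # number of terms in each partial sum

def sine_series(x):
    # odd extension of f(x) = x, i.e. the sawtooth
    return sum(2 * (-1) ** (n + 1) / n * math.sin(n * x) for n in range(1, N + 1))

def cosine_series(x):
    # even extension of f(x) = x, i.e. the triangle wave |x|
    return math.pi / 2 + sum(-4 / (math.pi * n ** 2) * math.cos(n * x)
                             for n in range(1, N + 1, 2))

# On (0, pi) both reproduce f:
print(abs(sine_series(1.0) - 1.0) < 1e-2)     # True
print(abs(cosine_series(1.0) - 1.0) < 1e-2)   # True

# Outside, they follow different extensions: odd gives -x, even gives |x|:
print(abs(sine_series(-1.0) + 1.0) < 1e-2)    # True: odd extension
print(abs(cosine_series(-1.0) - 1.0) < 1e-2)  # True: even extension
```

Note also how the cosine series needs far fewer terms for the same accuracy: its coefficients decay like $1/n^2$ because the even extension is continuous.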
Not all functions behave equally well under Fourier expansion. Understanding convergence tells you when to trust your series and where to expect trouble.
Near a jump discontinuity, the partial sums of the Fourier series overshoot by about 9% of the jump size. This overshoot persists no matter how many terms you include. As $N \to \infty$, the oscillations get narrower and concentrate closer to the discontinuity, but the peak overshoot never goes away.
If an exam asks why a Fourier approximation looks "wrong" near a jump, Gibbs phenomenon is your answer. It's a fundamental limitation of representing discontinuities with continuous sinusoidal functions, not a computational error.
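The overshoot can be measured directly. A sketch using the square wave $\mathrm{sign}(x)$ on $[-\pi, \pi]$, whose Fourier series is the standard $\frac{4}{\pi}\sum_k \frac{\sin((2k-1)x)}{2k-1}$ (the grid spacing below is an arbitrary numerical choice):

```python
import math

# Partial sum of the square-wave series with `terms` sine modes.
def partial_sum(x, terms):
    return 4 / math.pi * sum(math.sin((2 * k - 1) * x) / (2 * k - 1)
                             for k in range(1, terms + 1))

# The jump at x = 0 has size 2 (from -1 to +1). Scan a fine grid just
# right of the jump and measure the peak of each partial sum.
for terms in (25, 100, 400):
    peak = max(partial_sum(i * 1e-4, terms) for i in range(1, 2000))
    overshoot = (peak - 1) / 2   # as a fraction of the jump size
    print(terms, round(overshoot, 3))  # stays near 0.09 as terms grow
```

Adding more terms moves the overshoot peak closer to $x = 0$ but does not shrink it, which is exactly the Gibbs phenomenon.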
Compare: Continuous vs. discontinuous functions: continuous, piecewise-smooth functions have rapidly decaying coefficients ($O(1/n^2)$ or faster), while functions with jump discontinuities have coefficients that decay slowly ($O(1/n)$). This is why discontinuities require many more terms to approximate well.
This is where Fourier Series earn their keep. The technique transforms PDEs with periodic or boundary conditions into algebraic problems.
The standard approach combines separation of variables with Fourier Series. Here's how it works for a PDE like the heat equation $u_t = k\,u_{xx}$:

1. Assume a product solution $u(x, t) = X(x)T(t)$ and separate variables, turning the PDE into two ODEs.
2. Apply the boundary conditions to $X(x)$; this selects a discrete family of modes, e.g. $X_n(x) = \sin\frac{n\pi x}{L}$ for Dirichlet conditions.
3. Solve the time ODE for each mode: $T_n(t) = e^{-k(n\pi/L)^2 t}$.
4. Expand the initial condition in a Fourier series; its coefficients weight the modes in the solution $u(x, t) = \sum_n b_n\, e^{-k(n\pi/L)^2 t}\sin\frac{n\pi x}{L}$.
The superposition principle is what makes this work: because the PDE is linear, you can solve for each Fourier mode separately and sum the results.
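The mode-by-mode structure can be sketched numerically. Here the domain $[0, \pi]$ with $u(0,t) = u(\pi,t) = 0$, the diffusivity $k = 0.5$, and the initial data $u(x, 0) = \sin x + 0.5\sin 3x$ are all hypothetical choices for illustration:

```python
import math

k = 0.5
modes = {1: 1.0, 3: 0.5}  # Fourier sine coefficients b_n of u(x, 0)

# Each mode evolves independently: b_n sin(nx) -> b_n e^{-k n^2 t} sin(nx).
def u(x, t):
    return sum(b * math.exp(-k * n ** 2 * t) * math.sin(n * x)
               for n, b in modes.items())

# Boundary conditions hold for all t:
print(abs(u(0.0, 0.7)) < 1e-12 and abs(u(math.pi, 0.7)) < 1e-12)  # True

# Higher modes decay faster (rate k n^2), so the solution smooths out:
print(round(u(math.pi / 2, 0.0), 3))  # 0.5, the initial value at the midpoint
print(round(u(math.pi / 2, 2.0), 3))  # ~ e^{-1}: essentially only n = 1 survives
```

After even a short time the $n = 3$ mode is negligible ($e^{-9kt}$ vs. $e^{-kt}$), which is why heat-equation solutions look increasingly like a single sine arch.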
Compare: Fourier Series vs. Fourier Transform: series give you discrete frequencies (harmonics of the fundamental), while transforms give a continuous frequency spectrum. Use series for periodic problems or finite-domain boundary value problems; use transforms for non-periodic or infinite-domain problems.
| Concept | Key Details |
|---|---|
| Computing coefficients | $a_n$, $b_n$ integral formulas; complex $c_n$; remember the $\frac{a_0}{2}$ term in the series |
| Orthogonality principle | Inner products of distinct $\sin$, $\cos$ basis functions vanish; enables independent coefficient computation |
| Symmetry exploitation | Even $f$: cosine series only; odd $f$: sine series only; half-range expansions |
| Convergence behavior | Dirichlet conditions guarantee convergence; the series averages left/right limits at jumps |
| Gibbs phenomenon | ~9% overshoot at discontinuities; doesn't vanish with more terms |
| Energy relationships | Parseval's theorem: total energy = sum of modal energies |
| PDE applications | Separation of variables + Fourier coefficients from initial/boundary conditions |
| Series vs. Transform | Periodic/finite domain series; non-periodic/infinite domain transform |
Why does orthogonality of sine and cosine functions allow you to compute each Fourier coefficient independently? How is this analogous to projecting a vector onto an orthogonal basis in $\mathbb{R}^n$?
Given a function with a jump discontinuity, what value does the Fourier series converge to at that point, and what phenomenon affects the approximation nearby?
When would you use a half-range cosine expansion versus a half-range sine expansion? What boundary conditions does each naturally satisfy?
If $f(x)$ is an odd function, which coefficients are automatically zero? Explain why using the properties of odd and even functions under integration.
A function $f(x)$ on $[0, L]$ satisfies $f(0) = f(L) = 0$. You need to solve the heat equation with these boundary conditions. Would you use a sine series or cosine series expansion? Justify your choice and outline how Fourier's method transforms the PDE into solvable components.