
Calculus II

Key Concepts of Taylor Series


Why This Matters

Taylor series are one of the most powerful tools you'll encounter in Calculus II because they let you transform complicated functions into infinite polynomials—and polynomials are functions we actually know how to work with. You're being tested on your ability to construct series from derivatives, determine where series converge, and use series to approximate values with known error bounds. These skills connect directly to limits, derivatives, and integrals you've already mastered, while opening doors to numerical methods used in physics, engineering, and computer science.

The key insight is that Taylor series aren't just about memorizing formulas—they're about understanding why a function can be represented as an infinite sum and how accurately that representation works. When you see Taylor series on an exam, you'll need to recognize which standard series to use, manipulate series through differentiation and integration, and bound your approximation errors. Don't just memorize the series for e^x or \sin x—know how they're built and where they work.


Foundations: Building the Series

Before you can use Taylor series effectively, you need to understand what they are and how they're constructed. The core idea is that any sufficiently smooth function can be expressed as an infinite sum of polynomial terms, where each term captures information from a successive derivative.

Definition of Taylor Series

  • General form centered at a—f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n, where each term uses the nth derivative evaluated at the center point
  • Approximation near the center—the series matches the function's value, slope, concavity, and all higher-order behavior at x = a
  • Convergence requirement—the series only equals the function where it converges, which may be a limited interval or the entire real line
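
To make the definition concrete, here is a minimal sketch, assuming Python with SymPy installed, that builds a degree-N Taylor polynomial straight from the formula above and checks it against the original function near the center. The function \sin x, the center \pi/4, and the degree 5 are arbitrary choices for illustration.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)     # example function; any sufficiently smooth f works
a = sp.pi / 4     # example center
N = 5             # degree of the Taylor polynomial

# Build sum_{n=0}^{N} f^(n)(a)/n! * (x - a)^n directly from the definition
taylor_poly = sum(sp.diff(f, x, n).subs(x, a) / sp.factorial(n) * (x - a)**n
                  for n in range(N + 1))

# Near the center the polynomial should track the function closely
for pt in [0.7, 0.8, 0.9]:
    exact = float(f.subs(x, pt))
    approx = float(taylor_poly.subs(x, pt))
    print(f"x={pt}: f={exact:.6f}, T_{N}={approx:.6f}, error={abs(exact - approx):.2e}")
```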

Maclaurin Series

  • Special case where a = 0—simplifies to f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}x^n, eliminating the (x-a) terms
  • Preferred when f(0) is easy to compute—most standard series (e^x, \sin x, \cos x) are Maclaurin series because their derivatives at zero follow clean patterns
  • Same convergence principles apply—being centered at zero doesn't guarantee convergence everywhere

Taylor's Theorem

  • Remainder term quantifies error—R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} for some c between a and x
  • Lagrange form is most common—this specific remainder formula appears frequently on exams for error estimation problems
  • Connects finite approximation to infinite series—the theorem guarantees that if R_n(x) \to 0 as n \to \infty, the Taylor series converges to f(x)
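
As a worked instance of the theorem, take f(x) = e^x centered at a = 0 with n = 2:

e^x = \underbrace{1 + x + \frac{x^2}{2!}}_{T_2(x)} + \underbrace{\frac{e^{c}}{3!}x^3}_{R_2(x)} \quad \text{for some } c \text{ between } 0 \text{ and } x.

On [0, 0.5], for example, e^c \le e^{0.5} < 2, so |R_2(x)| \le \frac{2(0.5)^3}{3!} \approx 0.042, which tells you in advance how far the quadratic approximation can stray.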

Compare: Taylor Series vs. Maclaurin Series—both use the same construction principle, but Maclaurin is centered at a = 0 while Taylor can be centered anywhere. If an FRQ asks you to approximate \ln(1.1), a Maclaurin series for \ln(1+x) works perfectly; for \ln(4.9), you'd want a Taylor series centered at a = 5.
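
As a quick sanity check on the \ln(1.1) example, here is a minimal Python sketch (the helper name ln1p_maclaurin is just illustrative) that sums the Maclaurin series \ln(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}x^n}{n} at x = 0.1 and compares the partial sums to math.log(1.1):

```python
import math

def ln1p_maclaurin(x, terms):
    """Partial sum of ln(1+x) = x - x^2/2 + x^3/3 - ... (valid for -1 < x <= 1)."""
    return sum((-1)**(n + 1) * x**n / n for n in range(1, terms + 1))

x = 0.1
for terms in (1, 2, 3, 4):
    approx = ln1p_maclaurin(x, terms)
    print(f"{terms} term(s): {approx:.8f}  error = {abs(approx - math.log(1 + x)):.2e}")
```

Because the center 0 is close to 1.1 - 1 = 0.1, just a few terms already give several correct decimal places.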


Convergence: Where Does the Series Work?

A Taylor series is useless if you don't know where it actually represents the function. Convergence analysis tells you the set of xx-values where the infinite sum equals the original function.

Radius of Convergence

  • Defines the "reach" of convergence—if R is the radius, the series converges for |x-a| < R and diverges for |x-a| > R
  • Found using the Ratio Test—compute \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| and set it less than 1 to solve for valid x-values
  • Three possibilities—R = 0 (converges only at center), R = \infty (converges everywhere), or 0 < R < \infty (finite interval)
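
As a worked Ratio Test example, apply it to the series \ln(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}x^n}{n}:

\lim_{n \to \infty} \left| \frac{(-1)^{n+2} x^{n+1}/(n+1)}{(-1)^{n+1} x^n/n} \right| = \lim_{n \to \infty} \frac{n}{n+1}\,|x| = |x|.

Requiring |x| < 1 gives R = 1; the endpoints x = \pm 1 still have to be tested separately, which is exactly what the next subsection covers.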

Interval of Convergence

  • Extends radius to include endpoint behavior—the interval might be (a-R, a+R), [a-R, a+R), (a-R, a+R], or [a-R, a+R]
  • Endpoints require separate testing—plug each endpoint into the series and use tests for convergence (often alternating series or p-series tests)
  • Critical exam skill—many problems specifically ask whether endpoints are included, so never skip this step

Compare: Radius vs. Interval of Convergence—the radius tells you the distance from center where convergence is guaranteed, while the interval specifies the exact set including or excluding endpoints. The series for \ln(1+x) has R = 1, but its interval is (-1, 1] because it converges at x = 1 but not at x = -1.
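
A minimal numeric illustration of that endpoint behavior (plain Python, illustrative values): at x = 1 the partial sums settle toward \ln 2 \approx 0.693, while at x = -1 they drift off toward -\infty.

```python
# Partial sums of ln(1+x) = sum_{n>=1} (-1)^{n+1} x^n / n at the two endpoints
def partial_sum(x, terms):
    return sum((-1)**(n + 1) * x**n / n for n in range(1, terms + 1))

for terms in (10, 100, 1000, 10000):
    at_plus_one = partial_sum(1.0, terms)    # alternating harmonic series: converges to ln 2
    at_minus_one = partial_sum(-1.0, terms)  # negative harmonic series: diverges
    print(f"N={terms:6d}  x=1: {at_plus_one:.6f}   x=-1: {at_minus_one:.3f}")
```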


Standard Series: Your Toolkit

Memorizing these common series saves enormous time on exams. Each series below is a Maclaurin series (centered at 0) derived from the function's derivative pattern.

Exponential Function e^x

  • Series: e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots—all coefficients are positive because all derivatives of e^x equal e^x, and e^0 = 1
  • Converges for all real x—radius of convergence is infinite, making this series universally applicable
  • Foundation for other series—substituting -x, x^2, or other expressions generates series for e^{-x}, e^{x^2}, etc.
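
Here is a short sketch of that substitution idea (plain Python; the helper name exp_series is illustrative): once you can sum u^n/n!, plugging in u = -x^2 gives partial sums for e^{-x^2} with no new derivative work.

```python
import math

def exp_series(u, terms):
    """Partial sum of e^u = sum_{n>=0} u^n / n!."""
    return sum(u**n / math.factorial(n) for n in range(terms))

x = 0.8
print("e^x      :", exp_series(x, 10), "vs", math.exp(x))
print("e^(-x^2) :", exp_series(-x**2, 10), "vs", math.exp(-x**2))  # substitute u = -x^2
```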

Sine Function \sin x

  • Series: \sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots—only odd powers appear because \sin(0) = 0 and even derivatives at zero vanish
  • Alternating signs from derivative cycle—the pattern \sin, \cos, -\sin, -\cos creates the (-1)^n factor
  • Converges for all real x—infinite radius means you can approximate \sin x anywhere with enough terms

Cosine Function \cos x

  • Series: \cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots—only even powers appear because \cos(0) = 1 and odd derivatives at zero vanish
  • Related to sine by differentiation—taking the derivative of the \sin x series term-by-term yields the \cos x series
  • Converges for all real x—same infinite radius as \sin x and e^x

Compare: \sin x vs. \cos x Series—both alternate in sign and converge everywhere, but \sin x uses odd powers (starting with x) while \cos x uses even powers (starting with 1). On an FRQ, if you forget one, differentiate or integrate the other!
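
The differentiate-one-to-get-the-other trick can be checked symbolically. A small sketch, assuming SymPy is available, confirms that differentiating a partial sum of the \sin x series term-by-term reproduces the matching partial sum of the \cos x series:

```python
import sympy as sp

x = sp.symbols('x')
N = 6  # number of terms in each partial sum

sin_partial = sum((-1)**n * x**(2*n + 1) / sp.factorial(2*n + 1) for n in range(N))
cos_partial = sum((-1)**n * x**(2*n) / sp.factorial(2*n) for n in range(N))

# d/dx of the sin partial sum matches the cos partial sum term for term
print(sp.expand(sp.diff(sin_partial, x) - cos_partial))  # prints 0
```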


Working with Series: Operations and Error

Once you have a Taylor series, you can manipulate it and quantify how well finite approximations work. These techniques transform series from theoretical objects into practical computational tools.

Differentiation and Integration of Series

  • Term-by-term operations are valid inside the interval of convergence—differentiate or integrate each term of \sum a_n(x-a)^n as if it were a polynomial
  • Differentiation: \frac{d}{dx}\left[\sum_{n=0}^{\infty} a_n x^n\right] = \sum_{n=1}^{\infty} n \cdot a_n x^{n-1}—the radius of convergence stays the same (though endpoints may change)
  • Integration: \int \sum_{n=0}^{\infty} a_n x^n \, dx = C + \sum_{n=0}^{\infty} \frac{a_n x^{n+1}}{n+1}—useful for finding series of functions like \ln(1+x) or \arctan x
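
A sketch of the integration route (SymPy, illustrative choices): start from the geometric-type series \frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-1)^n x^{2n}, valid for |x| < 1, and integrate term-by-term to recover the \arctan x series.

```python
import sympy as sp

x = sp.symbols('x')
N = 6

# Geometric-type series for 1/(1 + x^2), valid for |x| < 1
geom_terms = [(-1)**n * x**(2*n) for n in range(N)]

# Integrate each term from 0: x^{2n} -> x^{2n+1}/(2n+1)
arctan_partial = sum(sp.integrate(t, x) for t in geom_terms)
print(arctan_partial)                     # x - x**3/3 + x**5/5 - ...
print(sp.series(sp.atan(x), x, 0, 2*N))   # SymPy's own expansion, for comparison
```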

Error Bounds and Estimation

  • Lagrange error bound: |R_n(x)| \leq \frac{M|x-a|^{n+1}}{(n+1)!}—where M is the maximum of |f^{(n+1)}| on the interval between a and x
  • Alternating series error bound—for alternating series, the error is bounded by the absolute value of the first omitted term
  • Determines required terms for accuracy—exam problems often ask "how many terms guarantee error less than 0.001?"
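
Here is a minimal sketch of that "how many terms?" workflow (plain Python, illustrative values): for the alternating \cos x series at x = 1, keep adding terms until the first omitted term drops below the target error.

```python
import math

x, target = 1.0, 0.001
n = 0
approx = 0.0
while True:
    approx += (-1)**n * x**(2*n) / math.factorial(2*n)
    # Alternating series bound: |error| <= |first omitted term|
    next_term = x**(2*(n + 1)) / math.factorial(2*(n + 1))
    if next_term < target:
        break
    n += 1

print(f"Used terms through n={n}; approx={approx:.6f}, true={math.cos(x):.6f}")
print(f"Guaranteed error bound: {next_term:.2e}")
```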

Power Series Representation

  • General form: \sum_{n=0}^{\infty} a_n(x-a)^n—Taylor series are power series where a_n = \frac{f^{(n)}(a)}{n!}
  • Uniqueness theorem—if a function has a power series representation on an interval, that series must be its Taylor series
  • Enables series manipulation—multiply, divide, compose, or substitute into known series to generate new ones
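
For example, since \frac{1}{1-x} = \sum_{n=0}^{\infty} x^n for |x| < 1, differentiating term-by-term (and invoking uniqueness) yields the series for \frac{1}{(1-x)^2} with no derivative computations at all:

\frac{1}{(1-x)^2} = \frac{d}{dx}\left[\frac{1}{1-x}\right] = \sum_{n=1}^{\infty} n x^{n-1} = 1 + 2x + 3x^2 + \cdots, \quad |x| < 1.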

Compare: Lagrange Error vs. Alternating Series Error—Lagrange works for any Taylor series but requires finding a bound M on the next derivative, while the alternating series bound is simpler but only applies when terms alternate in sign. For \cos x approximations, the alternating bound is usually easier; for e^x, you'll need Lagrange.


Applications: Why This All Matters

Taylor series aren't just theoretical—they're essential tools for solving real problems. These applications demonstrate why series approximations are fundamental to science and engineering.

Applications of Taylor Series

  • Numerical approximation—calculators and computers use Taylor polynomials to evaluate functions like \sin x, e^x, and \ln x to arbitrary precision
  • Solving differential equations—when closed-form solutions don't exist, series solutions provide answers as power series
  • Limit evaluation—replacing functions with their series often simplifies indeterminate forms like \frac{0}{0} without L'Hôpital's Rule

Compare: Taylor Series vs. L'Hôpital's Rule for Limits—both handle indeterminate forms, but series substitution often resolves limits in one step that would require multiple L'Hôpital applications. For \lim_{x \to 0} \frac{\sin x - x}{x^3}, substituting the series immediately shows the answer is -\frac{1}{6}.
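
Written out, that substitution looks like this:

\lim_{x \to 0} \frac{\sin x - x}{x^3} = \lim_{x \to 0} \frac{\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\right) - x}{x^3} = \lim_{x \to 0} \left(-\frac{1}{3!} + \frac{x^2}{5!} - \cdots\right) = -\frac{1}{6}.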


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Series centered at a = 0 | Maclaurin series for e^x, \sin x, \cos x |
| Infinite radius of convergence | e^x, \sin x, \cos x |
| Finite radius of convergence | \ln(1+x) with R = 1, \frac{1}{1-x} with R = 1 |
| Alternating series | \sin x, \cos x, \ln(1+x) |
| All positive terms | e^x, \frac{1}{1-x} for x > 0 |
| Odd powers only | \sin x, \arctan x |
| Even powers only | \cos x |
| Error estimation | Lagrange remainder, alternating series bound |

Self-Check Questions

  1. What distinguishes a Maclaurin series from a general Taylor series, and when would you choose one over the other for approximating \sqrt{e}?

  2. The series for e^x and \cos x both converge for all real numbers, yet one has all positive terms while the other alternates. How does this affect which error bound method you'd use for each?

  3. If you know the Taylor series for \frac{1}{1-x}, how would you find the series for \frac{1}{(1-x)^2} without computing derivatives directly?

  4. Compare and contrast the interval of convergence for \ln(1+x) and \frac{1}{1+x}. Why do they have the same radius but potentially different endpoint behavior?

  5. An FRQ asks you to approximate \int_0^{0.5} e^{-x^2} \, dx with error less than 0.001. Outline the steps you would take, including which series you'd use and how you'd bound the error.