Taylor series are the bridge between the calculus you've learned—derivatives, integrals, limits—and the powerful idea that any smooth function can be rebuilt from its derivatives at a single point. On the BC exam, you're being tested on whether you understand how polynomials can approximate transcendental functions, why those approximations work, and how to quantify their accuracy. This isn't just abstract theory: Taylor series let you evaluate impossible integrals, solve differential equations, and understand function behavior in ways that direct computation simply can't.
The key concepts you'll encounter include polynomial approximation, convergence and divergence, error bounds, and series manipulation. Don't just memorize the expansions for e^x, sin x, and cos x—know why they take the forms they do (alternating signs, factorial denominators, odd vs. even powers). When you understand the underlying structure, you can derive any series on the spot and tackle FRQ questions that ask you to build, manipulate, or analyze Taylor polynomials.
The Foundation: Taylor and Maclaurin Polynomials
Taylor polynomials capture a function's behavior by matching its value and derivatives at a center point. The more derivatives you match, the better your approximation becomes near that point.
Taylor Series Definition
General form: f(x) = ∑ (n=0 to ∞) [f^(n)(a)/n!] (x−a)^n—each term encodes one derivative's contribution
Coefficient formula: f^(n)(a)/n! ensures the polynomial's nth derivative at x = a matches f^(n)(a)
Polynomial approximation means replacing complicated functions with sums of powers—the foundation of numerical analysis
Maclaurin Series
Special case where a = 0, giving the simpler form f(x) = ∑ (n=0 to ∞) [f^(n)(0)/n!] x^n
Computational advantage—evaluating derivatives at zero often yields clean integer or zero values
Most common expansions (e^x, sin x, cos x, ln(1+x)) are Maclaurin series because their centers at zero produce elegant patterns
Compare: Taylor series centered at a vs. Maclaurin series—both use the same coefficient formula f^(n)(a)/n!, but Maclaurin fixes a = 0. If an FRQ gives you derivatives at a non-zero point, you're building a Taylor series; if derivatives are at zero, it's Maclaurin.
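You can check the definition numerically. A minimal sketch in Python (the function name `maclaurin_exp_poly` is my own, not a standard library routine): since every derivative of e^x is e^x, each Maclaurin coefficient is 1/n!, and matching more derivatives tightens the approximation near the center.

```python
import math

def maclaurin_exp_poly(x, degree):
    """Evaluate the degree-n Maclaurin polynomial of e^x.

    Every derivative of e^x is e^x, so f^(n)(0) = 1 and each
    coefficient is simply 1/n!.
    """
    return sum(x**n / math.factorial(n) for n in range(degree + 1))

# Near the center x = 0, the error shrinks as the degree grows:
for degree in (1, 3, 5):
    print(degree, abs(maclaurin_exp_poly(0.5, degree) - math.exp(0.5)))
```

Each extra matched derivative buys roughly another factor of |x|/(n+1) in accuracy, which is why low-degree polynomials already do well close to the center.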
The Essential Expansions You Must Know
These four series appear constantly on the BC exam. Memorize their forms, but understand their patterns—odd/even powers, alternating signs, and factorial growth.
Exponential Function e^x
Expansion: e^x = 1 + x + x²/2! + x³/3! + ⋯ = ∑ (n=0 to ∞) x^n/n!
All positive terms—since every derivative of e^x equals e^x, and e^0 = 1
Converges for all real x (infinite radius of convergence), making it the most well-behaved series
Sine and Cosine
Expansions: sin x = x − x³/3! + x⁵/5! − ⋯ and cos x = 1 − x²/2! + x⁴/4! − ⋯
Both converge for all real x, with alternating signs and factorial denominators
Natural Logarithm ln(1+x)
Expansion: ln(1+x) = x − x²/2 + x³/3 − ⋯ = ∑ (n=1 to ∞) (−1)^(n+1) x^n/n
No factorials—coefficients are simply 1/n, so the terms shrink more slowly and the series converges more slowly
Limited convergence: only valid for −1 < x ≤ 1—a common exam trap
Compare: sin x vs. cos x—both alternate signs and have factorial denominators, but sine uses odd powers (starts with x) while cosine uses even powers (starts with 1). Remember: sine is odd, cosine is even—the function's symmetry determines its series structure.
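The odd/even structure is easy to confirm numerically. A quick sketch (helper names `sin_series` and `cos_series` are illustrative): sine's odd powers force P(−x) = −P(x), cosine's even powers force P(−x) = P(x).

```python
import math

def sin_series(x, terms=10):
    """Maclaurin series for sin x: odd powers, alternating signs."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

def cos_series(x, terms=10):
    """Maclaurin series for cos x: even powers, alternating signs."""
    return sum((-1)**k * x**(2*k) / math.factorial(2*k)
               for k in range(terms))

# Parity check: sin_series(-x) == -sin_series(x), cos_series(-x) == cos_series(x)
```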
Error Analysis and the Lagrange Remainder
Understanding error bounds separates students who can use Taylor series from those who truly understand them. The Lagrange remainder tells you exactly how wrong your approximation might be.
Taylor's Theorem
Core guarantee—any sufficiently smooth function equals its Taylor polynomial plus a remainder term
Truncation creates error: stopping at degree n means ignoring infinitely many higher-order terms
Foundation for approximation—the theorem justifies why Taylor polynomials work at all
Lagrange Remainder Form
Formula: R_n(x) = [f^(n+1)(c)/(n+1)!] (x−a)^(n+1) for some c between a and x
The mystery c—you don't know its exact value, so you bound |f^(n+1)(c)| by its maximum on the interval
Error bound strategy: find M such that |f^(n+1)(c)| ≤ M, then |R_n(x)| ≤ M·|x−a|^(n+1)/(n+1)!
Compare: Taylor's Theorem vs. Mean Value Theorem—both guarantee the existence of some point c in an interval. MVT finds where the derivative equals the average rate of change; Taylor's Theorem uses c to express approximation error. FRQs often ask you to bound error using the Lagrange form—always identify the maximum of |f^(n+1)| on your interval.
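The error-bound strategy above can be sketched in a few lines. This is a worked check, not an exam template (the helper `lagrange_bound` is my own name): bound the derivative, compute the Lagrange bound, and verify the true error sits underneath it.

```python
import math

def lagrange_bound(M, x, a, n):
    """Worst-case error |R_n(x)| <= M * |x - a|^(n+1) / (n+1)!,
    where M bounds |f^(n+1)| on the interval between a and x."""
    return M * abs(x - a)**(n + 1) / math.factorial(n + 1)

# Approximating e^0.1 with a degree-3 Maclaurin polynomial:
# every derivative of e^x is e^x, so on [0, 0.1] we may take M = e^0.1 < 3.
bound = lagrange_bound(3, 0.1, 0, 3)
p3 = sum(0.1**k / math.factorial(k) for k in range(4))
actual_error = abs(math.exp(0.1) - p3)
# The actual error is guaranteed to be no larger than the bound.
```

Note the bound is deliberately pessimistic: M is the worst case of |f^(n+1)| over the whole interval, so the true error is usually smaller.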
Convergence: Where Does the Series Work?
A Taylor series isn't useful if it doesn't converge. The radius of convergence defines the "safe zone" where your series actually represents the function.
Radius of Convergence
Definition: the value R such that the series converges for ∣x−a∣<R and diverges for ∣x−a∣>R
Ratio test is your primary tool: if lim (n→∞) |a_{n+1}/a_n| = L, then R = 1/L (with R = ∞ when L = 0)
Three possibilities: R=0 (converges only at center), R=∞ (converges everywhere), or finite R
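A quick numerical sketch of the ratio test (helper names are illustrative): for e^x the coefficient ratio tends to 0, giving R = ∞; for ln(1+x) it tends to 1, giving R = 1.

```python
import math

def coeff_ratio_exp(n):
    """|a_{n+1} / a_n| for the e^x series, where a_n = 1/n!.
    Simplifies to 1/(n+1), which tends to 0, so R = infinity."""
    return (1 / math.factorial(n + 1)) / (1 / math.factorial(n))

def coeff_ratio_log(n):
    """|a_{n+1} / a_n| for the ln(1+x) series, where a_n = 1/n.
    Simplifies to n/(n+1), which tends to 1, so R = 1."""
    return (1 / (n + 1)) / (1 / n)
```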
Interval of Convergence
More than just radius—you must check endpoints separately using other convergence tests
Endpoint behavior varies: ln(1+x) converges at x=1 but diverges at x=−1
Common exam question: find the interval, then determine whether each endpoint is included
Compare: e^x (converges for all x) vs. ln(1+x) (converges only for −1 < x ≤ 1)—the exponential's coefficients 1/n! shrink factorially, overpowering any power of x, while the logarithm's coefficients 1/n shrink too slowly to tame |x| > 1. Always state the interval of convergence when writing a series.
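You can watch the endpoint behavior of ln(1+x) directly. A small sketch (function name `log_series_partial` is my own): at x = 1 the partial sums are the alternating harmonic series, which creeps toward ln 2; at x = −1 every term is −1/n, the negative harmonic series, whose partial sums run off without bound.

```python
import math

def log_series_partial(x, terms):
    """Partial sum of ln(1+x) = x - x^2/2 + x^3/3 - ..."""
    return sum((-1)**(n + 1) * x**n / n for n in range(1, terms + 1))

# x = 1: alternating harmonic series, converges (slowly) to ln 2.
# x = -1: every term is -1/n, so partial sums diverge to -infinity.
```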
Building and Manipulating Series
You won't always be handed a series—sometimes you need to construct one or modify a known series. These techniques turn memorized formulas into flexible problem-solving tools.
Finding Coefficients
Coefficient formula: a_n = f^(n)(a)/n! requires computing derivatives at the center
Pattern recognition—after finding the first few coefficients, look for a general formula
Derivative tables on FRQs often provide f(a), f′(a), f′′(a), … so you can build the polynomial directly
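Turning such a table into a polynomial is mechanical. A minimal sketch (the table values below are made up purely for illustration, and `taylor_poly_from_table` is my own helper):

```python
import math

def taylor_poly_from_table(derivs, a):
    """Build P_n(x) = sum of f^(k)(a)/k! * (x - a)^k from a table of
    derivative values derivs = [f(a), f'(a), f''(a), ...]."""
    def p(x):
        return sum(fk / math.factorial(k) * (x - a)**k
                   for k, fk in enumerate(derivs))
    return p

# Hypothetical FRQ table: f(2) = 1, f'(2) = 3, f''(2) = -4, f'''(2) = 12
p3 = taylor_poly_from_table([1, 3, -4, 12], a=2)
# gives P_3(x) = 1 + 3(x-2) - 2(x-2)^2 + 2(x-2)^3
```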
Series Operations
Addition/subtraction: combine term by term when series have the same center
Multiplication: use the Cauchy product or multiply out first few terms for polynomial approximations
Substitution: replace x with another expression (e.g., substitute −x² into e^x to get e^(−x²))
Composition and Substitution
Most efficient technique—rather than computing derivatives of e^(−x²), substitute u = −x² into the known series for e^u
Convergence changes: substituting x² for x in ln(1+x) gives ln(1+x²), valid for |x| ≤ 1
Integration/differentiation of series: integrate or differentiate term by term within the interval of convergence
Compare: Computing derivatives directly vs. substitution—for cos(x²), finding f^(n)(0) is tedious, but substituting x² into the cosine series instantly gives 1 − x⁴/2! + x⁸/4! − ⋯. Use substitution whenever a function is a composition involving a known series.
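The substitution technique above takes one line of code. A sketch (the name `exp_neg_x_squared` is my own): substituting u = −x² into e^u = ∑ u^n/n! gives ∑ (−1)^n x^(2n)/n!, with no derivatives of e^(−x²) ever computed.

```python
import math

def exp_neg_x_squared(x, terms=20):
    """Series for e^(-x^2), obtained by substituting u = -x^2 into
    e^u = sum of u^n / n!, which gives sum of (-1)^n x^(2n) / n!."""
    return sum((-1)**n * x**(2*n) / math.factorial(n) for n in range(terms))
```

Because e^u converges for every u, the substituted series still converges for all real x; substitution into a series with finite radius (like ln(1+x)) would instead shrink or reshape the interval.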
Power Series Connection
Taylor series are a specific type of power series. Understanding this relationship helps you see Taylor series as part of a larger framework.
Relationship to Power Series
Every Taylor series is a power series of the form ∑ (n=0 to ∞) c_n (x−a)^n
Not every power series is Taylor—Taylor series have coefficients determined by derivatives; general power series don't require this
Uniqueness theorem: if a function has a power series representation centered at a, it must be the Taylor series
Quick Reference Table
Concept
Best Examples
Maclaurin series (center at 0)
ex, sinx, cosx, ln(1+x)
Infinite radius of convergence
ex, sinx, cosx
Finite radius of convergence
ln(1+x) (R=1), 1−x1 (R=1)
Odd function series (odd powers)
sinx, arctanx
Even function series (even powers)
cosx
Alternating series
sinx, cosx, ln(1+x)
Error bound (Lagrange remainder)
Any truncated Taylor polynomial
Series by substitution
e−x2, cos(x2), ln(1+x2)
Self-Check Questions
Which series among e^x, sin x, cos x, and ln(1+x) converge for all real numbers? What makes them different from ln(1+x)?
If you're given a table of f(2), f′(2), f′′(2), and f′′′(2), write the formula for the third-degree Taylor polynomial centered at x=2.
Compare and contrast the series for sin x and cos x: How do their terms reflect the odd/even nature of each function?
You need to approximate e^0.1 with error less than 0.0001. How would you use the Lagrange remainder to determine how many terms are sufficient?
Explain why substituting −x² into the series for e^x is more efficient than computing derivatives of e^(−x²) directly. What is the resulting series for e^(−x²)?