Taylor series are one of the most powerful tools you'll encounter in Calculus II because they let you transform complicated functions into infinite polynomials—and polynomials are functions we actually know how to work with. You're being tested on your ability to construct series from derivatives, determine where series converge, and use series to approximate values with known error bounds. These skills connect directly to limits, derivatives, and integrals you've already mastered, while opening doors to numerical methods used in physics, engineering, and computer science.
The key insight is that Taylor series aren't just about memorizing formulas—they're about understanding why a function can be represented as an infinite sum and how accurately that representation works. When you see Taylor series on an exam, you'll need to recognize which standard series to use, manipulate series through differentiation and integration, and bound your approximation errors. Don't just memorize the series for e^x or sin x—know how they're built and where they work.
Foundations: Building the Series
Before you can use Taylor series effectively, you need to understand what they are and how they're constructed. The core idea is that any sufficiently smooth function can be expressed as an infinite sum of polynomial terms, where each term captures information from a successive derivative.
Definition of Taylor Series
General form centered at a—f(x) = Σ_{n=0}^∞ f^(n)(a)/n! · (x − a)^n, where each term uses the nth derivative evaluated at the center point
Approximation near the center—the series matches the function's value, slope, concavity, and all higher-order behavior at x=a
Convergence requirement—the series only equals the function where it converges, which may be a limited interval or the entire real line
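The construction above can be sketched numerically. This minimal Python snippet (the helper name taylor_poly is just for illustration) evaluates a Taylor polynomial from a list of derivative values at the center:

```python
import math

def taylor_poly(derivs_at_a, a, x):
    """Evaluate sum of f^(n)(a)/n! * (x - a)^n from a list of
    derivative values [f(a), f'(a), f''(a), ...]."""
    return sum(d / math.factorial(n) * (x - a) ** n
               for n, d in enumerate(derivs_at_a))

# Every derivative of e^x is e^x, so centered at a = 1 each derivative
# value equals e; eight terms already match math.exp closely near the center.
a = 1.0
derivs = [math.e] * 8
approx = taylor_poly(derivs, a, 1.3)
print(approx, math.exp(1.3))  # the two values agree to several decimals
```

The same helper works for any center a, which is exactly what distinguishes a general Taylor series from the Maclaurin special case below.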
Maclaurin Series
Special case where a = 0—simplifies to f(x) = Σ_{n=0}^∞ f^(n)(0)/n! · x^n, eliminating the (x − a) terms
Preferred when f(0) is easy to compute—most standard series (e^x, sin x, cos x) are Maclaurin series because their derivatives at zero follow clean patterns
Same convergence principles apply—being centered at zero doesn't guarantee convergence everywhere
Taylor's Theorem
Remainder term quantifies error—R_n(x) = f^(n+1)(c)/(n+1)! · (x − a)^(n+1) for some c between a and x
Lagrange form is most common—this specific remainder formula appears frequently on exams for error estimation problems
Connects finite approximation to infinite series—the theorem guarantees that if Rn(x)→0 as n→∞, the Taylor series converges to f(x)
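A quick numerical sketch, assuming f(x) = e^x on [0, 1], shows the actual truncation error sitting below the Lagrange remainder bound (since e^c is increasing, the maximum M on the interval is e^x):

```python
import math

x, n = 1.0, 5
# Degree-5 Maclaurin polynomial for e^x
p_n = sum(x ** k / math.factorial(k) for k in range(n + 1))
actual_error = abs(math.exp(x) - p_n)

# Lagrange bound |R_n| <= M * |x|^(n+1) / (n+1)!, with M = max of e^c
# on [0, x]; e^c is increasing, so M = e^x (here M = e).
M = math.exp(x)
bound = M * x ** (n + 1) / math.factorial(n + 1)
print(actual_error, bound)  # the actual error sits below the bound
```

Increasing n drives both numbers toward zero, which is the convergence guarantee the theorem describes.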
Compare: Taylor Series vs. Maclaurin Series—both use the same construction principle, but Maclaurin is centered at a=0 while Taylor can be centered anywhere. If an FRQ asks you to approximate ln(1.1), a Maclaurin series for ln(1+x) works perfectly; for ln(4.9), you'd want a Taylor series centered at a=5.
Convergence: Where Does the Series Work?
A Taylor series is useless if you don't know where it actually represents the function. Convergence analysis tells you the set of x-values where the infinite sum equals the original function.
Radius of Convergence
Defines the "reach" of convergence—if R is the radius, the series converges for |x − a| < R and diverges for |x − a| > R
Found using the Ratio Test—compute lim_{n→∞} |a_{n+1}/a_n| and set it less than 1 to solve for valid x-values
Three possibilities—R=0 (converges only at center), R=∞ (converges everywhere), or 0<R<∞ (finite interval)
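As a concrete sketch, take the geometric series Σ x^n, whose ratio-test limit is |x|, giving R = 1. Partial sums settle inside the radius and blow up outside it (helper name partial_sums is illustrative):

```python
# Partial sums of the geometric series sum of x^n; its ratio-test limit
# is |x|, so R = 1: inside the radius the sums settle, outside they grow.
def partial_sums(x, terms):
    total, out = 0.0, []
    for k in range(terms):
        total += x ** k
        out.append(total)
    return out

inside = partial_sums(0.5, 40)    # approaches 1/(1 - 0.5) = 2
outside = partial_sums(1.5, 40)   # keeps growing without bound
print(inside[-1], outside[-1])
```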
Interval of Convergence
Extends radius to include endpoint behavior—the interval might be (a−R,a+R), [a−R,a+R), (a−R,a+R], or [a−R,a+R]
Endpoints require separate testing—plug each endpoint into the series and use tests for convergence (often alternating series or p-series tests)
Critical exam skill—many problems specifically ask whether endpoints are included, so never skip this step
Compare: Radius vs. Interval of Convergence—the radius tells you the distance from center where convergence is guaranteed, while the interval specifies the exact set including or excluding endpoints. The series for ln(1+x) has R=1, but its interval is (−1,1] because it converges at x=1 but not at x=−1.
Standard Series: Your Toolkit
Memorizing these common series saves enormous time on exams. Each series below is a Maclaurin series (centered at 0) derived from the function's derivative pattern.
Exponential Function e^x
Series: e^x = Σ_{n=0}^∞ x^n/n! = 1 + x + x²/2! + x³/3! + ⋯—all coefficients are positive because all derivatives of e^x equal e^x, and e^0 = 1
Converges for all real x—radius of convergence is infinite, making this series universally applicable
Foundation for other series—substituting −x, x², or other expressions generates series for e^(−x), e^(x²), etc.
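The substitution trick above can be sketched directly: plugging u = −x² into the e^u series gives a series for e^(−x²) without computing a single new derivative (helper name exp_neg_x2 is illustrative):

```python
import math

# Substituting u = -x**2 into e^u = sum of u^n/n! gives
# e^(-x^2) = sum of (-1)^n x^(2n) / n!
def exp_neg_x2(x, terms=20):
    return sum((-1) ** n * x ** (2 * n) / math.factorial(n)
               for n in range(terms))

x = 0.7
print(exp_neg_x2(x), math.exp(-x * x))  # the two agree closely
```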
Sine Function sin x
Series: sin x = Σ_{n=0}^∞ (−1)^n x^(2n+1)/(2n+1)! = x − x³/3! + x⁵/5! − ⋯—only odd powers appear because sin(0) = 0 and even derivatives at zero vanish
Alternating signs from derivative cycle—the pattern sin, cos, −sin, −cos creates the (−1)^n factor
Converges for all real x—infinite radius means you can approximate sin x anywhere with enough terms
Cosine Function cos x
Series: cos x = Σ_{n=0}^∞ (−1)^n x^(2n)/(2n)! = 1 − x²/2! + x⁴/4! − ⋯—only even powers appear because cos(0) = 1 and odd derivatives at zero vanish
Related to sine by differentiation—taking the derivative of the sin x series term-by-term yields the cos x series
Converges for all real x—same infinite radius as sin x and e^x
Compare: sin x vs. cos x Series—both alternate in sign and converge everywhere, but sin x uses odd powers (starting with x) while cos x uses even powers (starting with 1). On an FRQ, if you forget one, differentiate or integrate the other!
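Both standard series can be sketched and checked against the library functions in a few lines (helper names sin_series and cos_series are illustrative):

```python
import math

def sin_series(x, terms=10):
    # sin x = sum of (-1)^n x^(2n+1) / (2n+1)!
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def cos_series(x, terms=10):
    # cos x = sum of (-1)^n x^(2n) / (2n)!
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

x = 1.2
print(sin_series(x), math.sin(x))
print(cos_series(x), math.cos(x))
```

Ten terms already agree with math.sin and math.cos to many decimal places for moderate x, since the factorial denominators grow so fast.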
Working with Series: Operations and Error
Once you have a Taylor series, you can manipulate it and quantify how well finite approximations work. These techniques transform series from theoretical objects into practical computational tools.
Differentiation and Integration of Series
Term-by-term operations are valid inside the interval of convergence—differentiate or integrate each term of Σ a_n(x − a)^n as if it were a polynomial
Differentiation: d/dx[Σ_{n=0}^∞ a_n x^n] = Σ_{n=1}^∞ n·a_n x^(n−1)—the radius of convergence stays the same (though endpoints may change)
Integration: ∫ Σ_{n=0}^∞ a_n x^n dx = C + Σ_{n=0}^∞ a_n x^(n+1)/(n+1)—useful for finding series of functions like ln(1+x) or arctan x
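The arctan x example works like this: integrate the geometric series for 1/(1 + x²) term by term, with C = 0 since arctan 0 = 0. A minimal sketch (helper name arctan_series is illustrative):

```python
import math

# 1/(1 + x^2) = sum of (-1)^n x^(2n) for |x| < 1; integrating term by term
# (C = 0 because arctan 0 = 0) gives arctan x = sum of (-1)^n x^(2n+1)/(2n+1).
def arctan_series(x, terms=50):
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1)
               for n in range(terms))

x = 0.5
print(arctan_series(x), math.atan(x))  # the two agree closely for |x| < 1
```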
Error Bounds and Estimation
Lagrange error bound: |R_n(x)| ≤ M·|x − a|^(n+1)/(n+1)!—where M is the maximum of |f^(n+1)| on the interval between a and x
Alternating series error bound—for alternating series, the error is bounded by the absolute value of the first omitted term
Determines required terms for accuracy—exam problems often ask "how many terms guarantee error less than 0.001?"
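A "how many terms?" question can be answered mechanically with the alternating bound. This sketch, assuming the target cos 1 with error below 0.001, counts terms until the first omitted term drops under the tolerance:

```python
import math

# Alternating series bound: the error is at most the first omitted term.
# How many terms of cos 1 = sum of (-1)^n / (2n)! guarantee error < 0.001?
target = 0.001
terms = 0
while 1 / math.factorial(2 * terms) >= target:  # next term still too large
    terms += 1

approx = sum((-1) ** n / math.factorial(2 * n) for n in range(terms))
print(terms, abs(approx - math.cos(1)))  # 4 terms suffice here
```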
Power Series Representation
General form: Σ_{n=0}^∞ a_n(x − a)^n—Taylor series are power series where a_n = f^(n)(a)/n!
Uniqueness theorem—if a function has a power series representation on an interval, that series must be its Taylor series
Enables series manipulation—multiply, divide, compose, or substitute into known series to generate new ones
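Series manipulation in action: differentiating the geometric series term by term produces the series for 1/(1 − x)² with no new derivative computations (helper name inv_sq_series is illustrative):

```python
# Differentiating the geometric series 1/(1 - x) = sum of x^n term by term
# gives 1/(1 - x)^2 = sum of n x^(n-1) = sum of (n + 1) x^n,
# no new derivatives needed.
def inv_sq_series(x, terms=200):
    return sum((n + 1) * x ** n for n in range(terms))

x = 0.3
print(inv_sq_series(x), 1 / (1 - x) ** 2)  # the two agree closely
```

By the uniqueness theorem, this differentiated series must be the Taylor series of 1/(1 − x)² itself.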
Compare: Lagrange Error vs. Alternating Series Error—Lagrange works for any Taylor series but requires finding a bound M on the next derivative, while the alternating series bound is simpler but only applies when terms alternate in sign. For cos x approximations, the alternating bound is usually easier; for e^x, you'll need Lagrange.
Applications: Why This All Matters
Taylor series aren't just theoretical—they're essential tools for solving real problems. These applications demonstrate why series approximations are fundamental to science and engineering.
Applications of Taylor Series
Numerical approximation—calculators and computers use Taylor polynomials to evaluate functions like sin x, e^x, and ln x to arbitrary precision
Solving differential equations—when closed-form solutions don't exist, series solutions provide answers as power series
Limit evaluation—replacing functions with their series often simplifies indeterminate forms like 0/0 without L'Hôpital's Rule
Compare: Taylor Series vs. L'Hôpital's Rule for Limits—both handle indeterminate forms, but series substitution often resolves limits in one step that would require multiple L'Hôpital applications. For lim_{x→0} (sin x − x)/x³, substituting the series immediately shows the answer is −1/6.
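The series prediction for that limit can be verified numerically: since sin x = x − x³/6 + x⁵/120 − ⋯, the ratio (sin x − x)/x³ equals −1/6 + x²/120 − ⋯, which approaches −1/6 as x shrinks:

```python
import math

# (sin x - x)/x^3 = -1/6 + x^2/120 - ..., so the ratio should approach
# -1/6 as x shrinks toward 0.
vals = [(math.sin(x) - x) / x ** 3 for x in (0.1, 0.01, 0.001)]
print(vals)  # each value is closer to -1/6 than the last
```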
Quick Reference Table
Concept—Best Examples
Series centered at a = 0—Maclaurin series for e^x, sin x, cos x
Infinite radius of convergence—e^x, sin x, cos x
Finite radius of convergence—ln(1+x) with R = 1, 1/(1 − x) with R = 1
Alternating series—sin x, cos x, ln(1+x)
All positive terms—e^x, 1/(1 − x) for 0 < x < 1
Odd powers only—sin x, arctan x
Even powers only—cos x
Error estimation—Lagrange remainder, alternating series bound
Self-Check Questions
What distinguishes a Maclaurin series from a general Taylor series, and when would you choose one over the other for approximating e?
The series for e^x and cos x both converge for all real numbers, yet one has all positive terms while the other alternates. How does this affect which error bound method you'd use for each?
If you know the Taylor series for 1/(1 − x), how would you find the series for 1/(1 − x)² without computing derivatives directly?
Compare and contrast the interval of convergence for ln(1+x) and 1/(1 + x). Why do they have the same radius but potentially different endpoint behavior?
An FRQ asks you to approximate ∫_0^0.5 e^(−x²) dx with error less than 0.001. Outline the steps you would take, including which series you'd use and how you'd bound the error.