Taylor series let you transform complicated functions into infinite polynomials, and polynomials are functions you actually know how to work with. In Calculus II, you're expected to construct series from derivatives, determine where series converge, and use series to approximate values with known error bounds. These skills build directly on limits, derivatives, and integrals while opening doors to numerical methods used in physics, engineering, and computer science.
Taylor series aren't just about memorizing formulas. They're about understanding why a function can be represented as an infinite sum and how accurately that representation works. On exams, you'll need to recognize which standard series to use, manipulate series through differentiation and integration, and bound your approximation errors. Don't just memorize the series for $e^x$ or $\sin x$. Know how they're built and where they work.
Before you can use Taylor series effectively, you need to understand what they are and how they're constructed. The core idea: any sufficiently smooth function can be expressed as an infinite sum of polynomial terms, where each term captures information from a successive derivative.
The general Taylor series for $f(x)$ centered at $x = a$ is:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n$$
Each term uses the $n$th derivative of $f$ evaluated at the center point $a$, divided by $n!$, and multiplied by $(x - a)^n$. The series is designed so that at $x = a$, it matches the function's value, slope, concavity, and all higher-order behavior exactly.
One critical detail: the series only equals the function where it converges. That might be a limited interval around $a$, or it might be the entire real line, depending on the function.
A Maclaurin series is just a Taylor series centered at $a = 0$:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}x^n$$

This eliminates the $(x - a)$ terms, which simplifies the algebra. Most of the standard series you'll memorize ($e^x$, $\sin x$, $\cos x$) are Maclaurin series because their derivatives at zero follow clean patterns. Being centered at zero doesn't guarantee convergence everywhere, though. The same convergence principles apply.
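To see how fast a Maclaurin series closes in on its function near the center, here is a minimal sketch (the helper name `maclaurin_exp` is ours, not a standard function) that sums the first few terms of the $e^x$ series and compares against `math.exp`:

```python
import math

# Partial sum of the Maclaurin series for e^x: sum of x^n / n! for n < terms.
def maclaurin_exp(x, terms):
    return sum(x ** n / math.factorial(n) for n in range(terms))

x = 1.0
for terms in (2, 4, 8):
    approx = maclaurin_exp(x, terms)
    print(terms, approx, abs(approx - math.exp(x)))  # error shrinks rapidly
```

The factorial in the denominator is doing the work: each extra term divides the leftover error by a growing factor.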
Taylor's Theorem gives you a way to quantify the error when you truncate the series after the degree-$n$ term. The Lagrange remainder is:

$$R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x - a)^{n+1}$$

for some $c$ between $a$ and $x$. You don't know the exact value of $c$, but you don't need it. You just need a bound on $|f^{(n+1)}(c)|$ over that interval.
This theorem connects finite approximation to the infinite series: if $R_n(x) \to 0$ as $n \to \infty$, then the Taylor series converges to $f(x)$ at that point.
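The remainder is a guarantee, not an exact error, and it's worth checking that the guarantee actually holds. A sketch for $e^x$ truncated after the cubic term at $x = 0.5$ (on $[0, 0.5]$ every derivative of $e^x$ is $e^x$, so $M = e^{0.5}$ is a valid bound; the variable names are ours):

```python
import math

# Degree-3 Taylor polynomial for e^x centered at a = 0, evaluated at x = 0.5.
x, n = 0.5, 3
partial = sum(x ** k / math.factorial(k) for k in range(n + 1))
actual_error = abs(math.exp(x) - partial)

# Lagrange bound: |R_n(x)| <= M * |x|^(n+1) / (n+1)!, with M bounding f^(n+1).
M = math.exp(x)  # every derivative of e^x on [0, 0.5] is at most e^0.5
lagrange_bound = M * x ** (n + 1) / math.factorial(n + 1)
print(actual_error, lagrange_bound, actual_error <= lagrange_bound)
```

The true error is a bit smaller than the bound, which is exactly what Taylor's Theorem promises.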
Compare: Taylor Series vs. Maclaurin Series: both use the same construction principle, but Maclaurin is centered at $a = 0$ while Taylor can be centered anywhere. If you need to approximate $e^{0.1}$, a Maclaurin series for $e^x$ works perfectly. For $\sin(3.1)$, you'd want a Taylor series centered at $a = \pi$.
A Taylor series is useless if you don't know where it actually represents the function. Convergence analysis tells you the set of $x$-values where the infinite sum equals the original function.
The radius of convergence $R$ defines how far from the center the series converges. The series converges for $|x - a| < R$ and diverges for $|x - a| > R$.
To find $R$, use the Ratio Test: apply it to the general term $a_n$ of the series, require

$$\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1,$$

and solve the resulting inequality for $|x - a|$.
There are three possibilities: $R = 0$ (converges only at the center), $R = \infty$ (converges everywhere), or some finite positive value.
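As a quick numerical sanity check (a sketch, not an exam method): for the $\ln(1+x)$ series the coefficients are $c_n = (-1)^{n+1}/n$, and the coefficient ratios $|c_n / c_{n+1}|$ tend to the radius $R = 1$:

```python
# Coefficients of the ln(1+x) Maclaurin series: c_n = (-1)^(n+1) / n.
def c(n):
    return (-1) ** (n + 1) / n

# |c_n / c_{n+1}| = (n + 1) / n, which tends to the radius R = 1.
for n in (10, 100, 1000):
    print(n, abs(c(n) / c(n + 1)))
```

On an exam you'd take the limit algebraically, but watching the ratios settle toward 1 makes the conclusion concrete.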
The interval of convergence extends the radius by specifying what happens at the endpoints. The interval might be $(a - R, a + R)$, $[a - R, a + R)$, $(a - R, a + R]$, or $[a - R, a + R]$.
Endpoints require separate testing. Plug each endpoint into the series and use convergence tests like the alternating series test or $p$-series test. Many exam problems specifically ask whether endpoints are included, so never skip this step.
Compare: Radius vs. Interval of Convergence: the radius tells you the distance from center where convergence is guaranteed, while the interval specifies the exact set including or excluding endpoints. The series for $\ln(1+x)$ has $R = 1$, but its interval is $(-1, 1]$ because it converges at $x = 1$ (alternating harmonic series) but diverges at $x = -1$ (negative harmonic series).
Memorizing these common series saves enormous time on exams. Each one is a Maclaurin series derived from the function's derivative pattern.
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$

All coefficients are positive because every derivative of $e^x$ is $e^x$, and $e^0 = 1$. This series converges for all real $x$ (infinite radius of convergence).
It's also a foundation for building other series. Substituting $-x$ gives the series for $e^{-x}$, substituting $x^2$ gives the series for $e^{x^2}$, and so on.
$$\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$$

Only odd powers appear. This happens because $\sin 0 = 0$ and the even-numbered derivatives of $\sin x$ are all zero at $x = 0$. The alternating signs come from the derivative cycle: $\sin x \to \cos x \to -\sin x \to -\cos x \to \sin x$.
Converges for all real $x$.
$$\cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$$

Only even powers appear because $\cos 0 = 1$ and the odd-numbered derivatives of $\cos x$ are all zero at $x = 0$. Differentiating the $\sin x$ series term-by-term yields the $\cos x$ series, which is a useful check.
Converges for all real $x$.
$$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + x^3 + \cdots$$

This converges only for $|x| < 1$. It's the simplest power series and a surprisingly useful starting point. You can substitute $-x$ or $-x^2$ to get the series for $\frac{1}{1+x}$ or $\frac{1}{1+x^2}$, then integrate term-by-term to derive the series for $\ln(1+x)$ or $\arctan x$.
$$\ln(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} x^n}{n} = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots$$

This comes from integrating the series for $\frac{1}{1+x}$. It converges on $(-1, 1]$. Notice the endpoint behavior: at $x = 1$ you get the alternating harmonic series (converges), but at $x = -1$ you get the divergent harmonic series.
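The endpoint $x = 1$ is worth seeing numerically. A sketch (the helper name `ln2_partial` is ours): the partial sums of the alternating harmonic series creep toward $\ln 2$, and the alternating series bound says the error never exceeds the first omitted term, $\frac{1}{N+1}$:

```python
import math

# Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ... (N terms),
# i.e. the ln(1+x) series evaluated at the endpoint x = 1.
def ln2_partial(N):
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

N = 100
error = abs(math.log(2) - ln2_partial(N))
print(error, 1 / (N + 1), error <= 1 / (N + 1))
```

Convergence at this endpoint is slow (the terms shrink like $1/n$, not factorially), which is typical right at the edge of the interval of convergence.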
Compare: $\sin x$ vs. $\cos x$ Series: both alternate in sign and converge everywhere, but $\sin x$ uses odd powers (starting with $x$) while $\cos x$ uses even powers (starting with 1). If you forget one on an exam, differentiate or integrate the other.
Once you have a Taylor series, you can manipulate it and quantify how well finite approximations work. These techniques turn series from theoretical objects into practical computational tools.
You can differentiate or integrate a power series term-by-term inside its interval of convergence, just as if it were a polynomial.
Differentiation: $\frac{d}{dx} \sum_{n=0}^{\infty} c_n (x - a)^n = \sum_{n=1}^{\infty} n c_n (x - a)^{n-1}$. The radius of convergence stays the same, though endpoint convergence may change.
Integration: $\int \sum_{n=0}^{\infty} c_n (x - a)^n \, dx = C + \sum_{n=0}^{\infty} \frac{c_n}{n+1} (x - a)^{n+1}$. This is how you derive series for $\ln(1+x)$ (integrate $\frac{1}{1+x}$) and $\arctan x$ (integrate $\frac{1}{1+x^2}$).
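Here is term-by-term integration in action, as a sketch (the function name `arctan_series` is ours): substituting $-x^2$ into the geometric series gives $\frac{1}{1+x^2} = \sum (-1)^n x^{2n}$, and integrating each term produces $\arctan x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{2n+1}$, which we can check against `math.atan`:

```python
import math

# arctan x = sum of (-1)^n * x^(2n+1) / (2n+1), obtained by integrating
# the geometric series for 1/(1+x^2) term by term (valid for |x| < 1).
def arctan_series(x, terms):
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

x = 0.5
print(arctan_series(x, 20), math.atan(x))  # the two agree closely
```

Note the integrated series has no factorials, so it converges more slowly than the $\sin x$ or $\cos x$ series and only on $[-1, 1]$.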
Two main tools for bounding error:
Lagrange error bound (works for any Taylor series):

$$|R_n(x)| \le \frac{M}{(n+1)!} |x - a|^{n+1}$$

where $M$ is the maximum of $|f^{(n+1)}(c)|$ for $c$ between $a$ and $x$. The tricky part is finding $M$. For $e^x$, you can bound the derivative on your interval. For trig functions, all derivatives are bounded by 1.
Alternating series error bound (works only when terms alternate in sign and decrease in absolute value): the error is at most the absolute value of the first omitted term. This is simpler to use when it applies.
A common exam question: "How many terms guarantee error less than 0.001?" You set the appropriate error bound less than 0.001 and solve for $n$.
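Here's that style of question worked numerically for $\sin(1)$ using the alternating bound (a sketch; the loop simply finds the first omitted term smaller than the tolerance, which is what you'd do by hand):

```python
import math

# How many terms of sin x = x - x^3/3! + x^5/5! - ... guarantee
# error < 0.001 at x = 1? Alternating bound: error <= first omitted term.
x, tol = 1.0, 0.001
k = 1
while x ** (2 * k + 1) / math.factorial(2 * k + 1) >= tol:
    k += 1
print(k)  # number of terms needed

approx = sum((-1) ** j * x ** (2 * j + 1) / math.factorial(2 * j + 1)
             for j in range(k))
print(abs(math.sin(x) - approx) < tol)  # the guarantee holds
```

Three terms suffice here because $\frac{1}{7!} \approx 0.0002 < 0.001$, while $\frac{1}{5!} \approx 0.0083$ is still too large.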
Every Taylor series is a power series of the form $\sum_{n=0}^{\infty} c_n (x - a)^n$, where the coefficients are $c_n = \frac{f^{(n)}(a)}{n!}$.
The uniqueness theorem is important: if a function has a power series representation on an interval, that series must be its Taylor series. This means you can find a Taylor series by any valid method (substitution, multiplication, composition of known series) and be confident it's the right one. You don't always have to compute derivatives directly.
Compare: Lagrange Error vs. Alternating Series Error: Lagrange works for any Taylor series but requires finding a bound on the next derivative. The alternating series bound is simpler but only applies when terms alternate in sign and decrease in magnitude. For $\sin x$ or $\cos x$ approximations, the alternating bound is usually easier. For $e^x$, you'll need Lagrange.
Taylor series aren't just theoretical. They show up whenever you need to compute, approximate, or simplify.
Numerical approximation: Calculators and computers use Taylor polynomials to evaluate functions like $e^x$, $\sin x$, and $\cos x$. The factorial in the denominator makes terms shrink fast, so a few terms often give excellent accuracy.
Solving differential equations: When closed-form solutions don't exist, you can assume a power series solution and solve for the coefficients. This technique appears frequently in physics and engineering courses.
Limit evaluation: Replacing functions with their series often simplifies indeterminate forms without repeated applications of L'Hôpital's Rule.
Compare: Taylor Series vs. L'Hôpital's Rule for Limits: both handle indeterminate forms, but series substitution often resolves limits in one step that would require multiple L'Hôpital applications. For $\lim_{x \to 0} \frac{\sin x}{x}$, substitute the series: $\frac{x - x^3/3! + \cdots}{x} = 1 - \frac{x^2}{3!} + \cdots \to 1$.
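A quick numerical check of this series view of a limit (a sketch, assuming the standard $\sin x$ expansion): for small $x$, the ratio $\frac{\sin x}{x}$ tracks the truncated series $1 - \frac{x^2}{6}$ almost exactly:

```python
import math

# sin(x)/x = 1 - x^2/6 + x^4/120 - ..., so the ratio approaches 1 as x -> 0
# and the quadratic truncation already matches to high accuracy.
for x in (0.1, 0.01, 0.001):
    print(x, math.sin(x) / x, 1 - x ** 2 / 6)
```

The leftover disagreement is the next series term, $\frac{x^4}{120}$, which is why the two columns match to more digits as $x$ shrinks.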
| Concept | Best Examples |
|---|---|
| Series centered at $x = 0$ | Maclaurin series for $e^x$, $\sin x$, $\cos x$ |
| Infinite radius of convergence | $e^x$, $\sin x$, $\cos x$ |
| Finite radius of convergence | $\frac{1}{1-x}$ with $R = 1$, $\ln(1+x)$ with $R = 1$ |
| Alternating series | $\sin x$, $\cos x$, $\ln(1+x)$ |
| All positive terms | $e^x$, $\frac{1}{1-x}$ for $0 < x < 1$ |
| Odd powers only | $\sin x$, $\arctan x$ |
| Even powers only | $\cos x$ |
| Error estimation | Lagrange remainder, alternating series bound |
What distinguishes a Maclaurin series from a general Taylor series, and when would you choose one over the other for approximating $e^{0.1}$?
The series for $e^x$ and $\sin x$ both converge for all real numbers, yet one has all positive terms while the other alternates. How does this affect which error bound method you'd use for each?
If you know the Taylor series for $e^x$, how would you find the series for $e^{-x^2}$ without computing derivatives directly?
Compare the interval of convergence for $\frac{1}{1-x}$ and $\ln(1+x)$. Why do they have the same radius but different endpoint behavior?
An FRQ asks you to approximate $\sin(0.5)$ with error less than 0.001. Outline the steps you would take, including which series you'd use and how you'd bound the error.