Limit Laws and Techniques

Limit laws let you break a complex limit into simpler pieces that you can evaluate individually. Once you know these rules, you can handle most limit problems in Calculus I by combining a few basic moves: direct substitution, algebraic simplification, and (when those fail) special techniques like the squeeze theorem.
Fundamental limit laws
The core idea is this: if $\lim_{x \to a} f(x)$ and $\lim_{x \to a} g(x)$ both exist, you can split up a complicated limit into parts. Each law below requires that the individual limits exist.
- Sum law: $\lim_{x \to a} [f(x) + g(x)] = \lim_{x \to a} f(x) + \lim_{x \to a} g(x)$
- Difference law: $\lim_{x \to a} [f(x) - g(x)] = \lim_{x \to a} f(x) - \lim_{x \to a} g(x)$
- Product law: $\lim_{x \to a} [f(x) \cdot g(x)] = \lim_{x \to a} f(x) \cdot \lim_{x \to a} g(x)$
- Quotient law: $\lim_{x \to a} \frac{f(x)}{g(x)} = \frac{\lim_{x \to a} f(x)}{\lim_{x \to a} g(x)}$, provided $\lim_{x \to a} g(x) \neq 0$
- Power law: $\lim_{x \to a} [f(x)]^n = \left[\lim_{x \to a} f(x)\right]^n$ for any positive integer $n$
- Root law: $\lim_{x \to a} \sqrt[n]{f(x)} = \sqrt[n]{\lim_{x \to a} f(x)}$ for any positive integer $n$ (and if $n$ is even, the limit inside must be positive)
- Constant multiple rule: $\lim_{x \to a} [c \cdot f(x)] = c \cdot \lim_{x \to a} f(x)$
There's also a composition law: if $\lim_{x \to a} g(x) = L$ and $f$ is continuous at $L$, then $\lim_{x \to a} f(g(x)) = f(L)$. This comes up whenever you have a function nested inside another, like $\sqrt{x^2 + 1}$.
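The laws above can be sanity-checked numerically. This is an illustrative sketch, not a proof: the helper `numeric_limit` and the sample functions are made up for the demonstration, and it only spot-checks the sum law at one point.

```python
def numeric_limit(f, a, h=1e-6):
    """Crude two-sided numerical estimate of lim_{x -> a} f(x).

    Averages values just left and right of a; for illustration only.
    """
    return (f(a - h) + f(a + h)) / 2

f = lambda x: x ** 2      # lim_{x -> 3} f(x) = 9
g = lambda x: 2 * x + 1   # lim_{x -> 3} g(x) = 7

# Sum law: the limit of the sum matches the sum of the limits.
lim_of_sum = numeric_limit(lambda x: f(x) + g(x), 3)
sum_of_lims = numeric_limit(f, 3) + numeric_limit(g, 3)
print(lim_of_sum, sum_of_lims)  # both are approximately 16
```

The same pattern (estimate each side of an identity separately and compare) works for the product and quotient laws too.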
Limits of polynomial functions
Polynomials are continuous everywhere, so you can always evaluate their limits by direct substitution: just plug in the value of $x$.
For example: $\lim_{x \to 2} (3x^2 - 5x + 1) = 3(2)^2 - 5(2) + 1 = 3$.
Rational functions (a polynomial divided by a polynomial) also allow direct substitution, as long as the denominator isn't zero at that point. When you plug in and get $\frac{0}{0}$, that's an indeterminate form, which signals that you need algebraic work before you can find the limit.
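This decision process can be mechanized: substitute, and if the denominator is zero, check whether the numerator is zero too. The helper name and its messages below are invented for illustration.

```python
def try_substitution(p, q, a):
    """Attempt direct substitution for p(x)/q(x) at x = a (sketch).

    Returns the value when the denominator is nonzero; otherwise
    reports whether the result is the 0/0 indeterminate form.
    """
    num, den = p(a), q(a)
    if den != 0:
        return num / den
    if num == 0:
        return "0/0 indeterminate: simplify algebraically first"
    return "nonzero/0: look for an infinite limit / vertical asymptote"

p = lambda x: x ** 2 - 9
q = lambda x: x - 3
print(try_substitution(p, q, 1))  # (-8)/(-2) = 4.0, substitution works
print(try_substitution(p, q, 3))  # 0/0, substitution fails
```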

Simplifying complex limit expressions
When direct substitution gives you $\frac{0}{0}$, try these techniques:
Factoring and canceling: Factor the numerator and denominator, then cancel the common factor that's causing the zero. For example: $\lim_{x \to 2} \frac{x^2 - 4}{x - 2} = \lim_{x \to 2} \frac{(x - 2)(x + 2)}{x - 2} = \lim_{x \to 2} (x + 2) = 4$.
You can cancel the common factor because limits care about what happens near $x = 2$, not at $x = 2$.
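You can watch this numerically: the unsimplified quotient $\frac{x^2 - 4}{x - 2}$ is undefined exactly at $x = 2$, yet its values on either side settle toward 4, matching the canceled form $x + 2$. A quick sketch:

```python
# Undefined at x = 2, but equal to x + 2 at every other point.
f = lambda x: (x ** 2 - 4) / (x - 2)

# Approach x = 2 from both sides; the values close in on 4.
for h in (0.1, 0.01, 0.001, -0.001, -0.01, -0.1):
    print(2 + h, f(2 + h))
```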
Multiplying by the conjugate: When square roots are involved, multiply the numerator and denominator by the conjugate to eliminate the radical. For example: $\lim_{x \to 0} \frac{\sqrt{x + 1} - 1}{x} = \lim_{x \to 0} \frac{x}{x(\sqrt{x + 1} + 1)} = \lim_{x \to 0} \frac{1}{\sqrt{x + 1} + 1} = \frac{1}{2}$.
The conjugate trick works because $(\sqrt{a} - b)(\sqrt{a} + b) = a - b^2$, which removes the square root from the numerator.
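Numerically, the original $\frac{0}{0}$ form and its conjugate-simplified version agree near $x = 0$, and the simplified form even evaluates cleanly at 0 itself. A small check (the function names are chosen for illustration):

```python
import math

original = lambda x: (math.sqrt(x + 1) - 1) / x    # 0/0 form at x = 0
simplified = lambda x: 1 / (math.sqrt(x + 1) + 1)  # after the conjugate trick

print(original(1e-8))   # approximately 0.5
print(simplified(0.0))  # exactly 0.5: the limit
```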
Application of the squeeze theorem
The squeeze theorem (also called the sandwich theorem) is your go-to when algebraic simplification won't work, especially with oscillating functions like sine and cosine.
The setup: if $g(x) \leq f(x) \leq h(x)$ for all $x$ near $a$ (except possibly at $a$ itself), and $\lim_{x \to a} g(x) = \lim_{x \to a} h(x) = L$, then $\lim_{x \to a} f(x) = L$.
You're "squeezing" $f$ between two functions that both converge to the same value, so $f$ has no choice but to converge there too.
Classic example: Show that $\lim_{x \to 0} x^2 \sin\left(\frac{1}{x}\right) = 0$.
- You know $-1 \leq \sin\left(\frac{1}{x}\right) \leq 1$ for all $x \neq 0$.
- Multiply through by $x^2$ (which is non-negative): $-x^2 \leq x^2 \sin\left(\frac{1}{x}\right) \leq x^2$.
- Both $\lim_{x \to 0} (-x^2) = 0$ and $\lim_{x \to 0} x^2 = 0$.
- By the squeeze theorem, $\lim_{x \to 0} x^2 \sin\left(\frac{1}{x}\right) = 0$.
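A numerical sanity check of the squeeze: at points approaching 0, the value of $x^2 \sin(1/x)$ stays trapped between $-x^2$ and $x^2$, and all three quantities shrink to 0 together. A sketch for illustration only:

```python
import math

f = lambda x: x ** 2 * math.sin(1 / x)  # undefined at 0, but squeezed near it

for x in (0.1, 0.01, 0.001):
    lower, upper = -x ** 2, x ** 2
    assert lower <= f(x) <= upper       # the squeeze bounds hold
    print(x, lower, f(x), upper)        # all three columns shrink toward 0
```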

Specific limit strategies by function type
- Polynomial functions: Direct substitution always works.
- Rational functions: Try direct substitution first. If you get $\frac{0}{0}$, factor and cancel, or multiply by a conjugate.
- Trigonometric functions: Use the squeeze theorem or key identities. Two limits you should memorize: $\lim_{x \to 0} \frac{\sin x}{x} = 1$ and $\lim_{x \to 0} \frac{1 - \cos x}{x} = 0$.
- Exponential and logarithmic functions: These are continuous on their domains, so direct substitution typically works. Use properties of exponents and logarithms to simplify first when needed.
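The memorized limit $\lim_{x \to 0} \frac{\sin x}{x} = 1$ is easy to check numerically. This sketch just tabulates values near 0 and proves nothing on its own:

```python
import math

# sin(x)/x is undefined at 0, but its values approach 1 from both sides.
for x in (0.5, 0.1, 0.01, -0.01, -0.1):
    print(x, math.sin(x) / x)

# The companion limit (1 - cos x)/x approaches 0.
for x in (0.1, 0.01, -0.01):
    print(x, (1 - math.cos(x)) / x)
```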
Function behavior near points
One-sided limits describe what happens as $x$ approaches $a$ from just one direction:
- From the left: $\lim_{x \to a^-} f(x)$
- From the right: $\lim_{x \to a^+} f(x)$
The two-sided limit $\lim_{x \to a} f(x)$ exists only if both one-sided limits exist and are equal. If they disagree, the two-sided limit does not exist, which indicates some kind of discontinuity at $x = a$.
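The sign function $|x|/x$ is a standard illustration: its one-sided limits at 0 are $-1$ and $+1$, so the two-sided limit does not exist. A quick numerical look (for illustration only):

```python
f = lambda x: abs(x) / x  # the sign function; undefined at x = 0

left_values = [f(-h) for h in (0.1, 0.01, 0.001)]   # all -1.0
right_values = [f(h) for h in (0.1, 0.01, 0.001)]   # all  1.0
print(left_values, right_values)
# The one-sided limits disagree (-1 vs. 1), so lim_{x->0} |x|/x does not exist.
```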
Infinite limits occur when function values grow without bound. For example: $\lim_{x \to 0^+} \frac{1}{x} = +\infty$ and $\lim_{x \to 0^-} \frac{1}{x} = -\infty$.
Since the one-sided limits aren't equal (and aren't even finite), the two-sided limit does not exist. The graph has a vertical asymptote at $x = 0$.
Continuity and limit definitions
A function $f$ is continuous at $x = a$ if three things hold:
- $f(a)$ is defined.
- $\lim_{x \to a} f(x)$ exists.
- $\lim_{x \to a} f(x) = f(a)$.
This is exactly why direct substitution works for polynomials and other continuous functions: condition 3 says the limit equals the function value.
The epsilon-delta definition gives a rigorous way to prove a limit statement. It says $\lim_{x \to a} f(x) = L$ if for every $\varepsilon > 0$, there exists a $\delta > 0$ such that whenever $0 < |x - a| < \delta$, we have $|f(x) - L| < \varepsilon$. You may or may not need to write epsilon-delta proofs in Calc I, but understanding the idea (making $f(x)$ as close to $L$ as you want by keeping $x$ close enough to $a$) helps the concept of a limit make sense.
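For a concrete linear case like $\lim_{x \to 2} (3x - 1) = 5$, algebra gives $|f(x) - 5| = 3|x - 2|$, so $\delta = \varepsilon/3$ works. The sampling check below only illustrates the definition at finitely many points; it is not a proof, and its names are made up:

```python
def delta_works(f, a, L, delta, eps, n=1000):
    """Spot-check the epsilon-delta condition at sampled points with
    0 < |x - a| < delta (a numerical illustration, not a proof)."""
    return all(
        abs(f(a + sign * delta * k / n) - L) < eps
        for k in range(1, n)   # k/n < 1 keeps |x - a| strictly below delta
        for sign in (-1, 1)
    )

f = lambda x: 3 * x - 1
eps = 0.03
delta = eps / 3  # from |3x - 1 - 5| = 3|x - 2| < eps when |x - 2| < eps/3
print(delta_works(f, 2.0, 5.0, delta, eps))       # True: this delta works
print(delta_works(f, 2.0, 5.0, 10 * delta, eps))  # False: delta too generous
```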
The Intermediate Value Theorem (IVT) states that if $f$ is continuous on $[a, b]$ and $N$ is any value between $f(a)$ and $f(b)$, then there exists some $c$ in $(a, b)$ with $f(c) = N$. This is often used to show that an equation has a solution: if $f(a)$ is negative and $f(b)$ is positive (or vice versa), then $f$ must cross zero somewhere between $a$ and $b$.
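Bisection is the IVT made algorithmic. For instance, $f(x) = x^3 + x - 1$ is continuous with $f(0) = -1 < 0$ and $f(1) = 1 > 0$, so the IVT guarantees a root in $(0, 1)$, and repeatedly halving the sign-change interval pins it down. A minimal sketch:

```python
def bisect(f, a, b, tol=1e-10):
    """Locate a zero of continuous f on [a, b] via repeated halving."""
    assert f(a) * f(b) < 0    # the IVT hypothesis: a sign change on [a, b]
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:  # the sign change is in the left half
            b = m
        else:                 # otherwise it's in the right half
            a = m
    return (a + b) / 2

f = lambda x: x ** 3 + x - 1
root = bisect(f, 0.0, 1.0)
print(root)  # approximately 0.6823
```

Each pass keeps the half-interval where the sign change survives, so the IVT applies again at every step; that invariant is the whole correctness argument.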