🎛️Control Theory Unit 1 Review

1.3 Laplace transforms

Written by the Fiveable Content Team • Last updated August 2025

The Laplace transform converts time-domain functions into a complex frequency domain, turning differential equations into algebraic ones. This is the core technique that makes most of control theory workable. Without it, analyzing even moderately complex systems would require solving differential equations by hand every time.

This guide covers the definition and existence conditions, key properties, common transform pairs, inverse transform techniques, and how Laplace transforms are applied throughout control systems.

Definition of Laplace transforms

The Laplace transform takes a function of time f(t) and maps it to a function F(s) of the complex variable s = \sigma + j\omega. The formal definition is:

F(s) = \mathcal{L}\{f(t)\} = \int_0^{\infty} f(t)e^{-st} \, dt

The integral multiplies f(t) by a decaying exponential e^{-st} and sums over all time from 0 to infinity. The result is a function of s that encodes the same information as f(t), but in a form where calculus operations become algebra.

Laplace transform vs inverse Laplace transform

  • The Laplace transform converts a time-domain function f(t) into a complex frequency-domain function F(s)
  • The inverse Laplace transform converts F(s) back into the original time-domain function f(t)

These are denoted as:

  • Laplace transform: \mathcal{L}\{f(t)\} = F(s)
  • Inverse Laplace transform: \mathcal{L}^{-1}\{F(s)\} = f(t)

The two operations undo each other. You'll typically transform into the s-domain to do your analysis or algebra, then invert back to get the time-domain answer.

Laplace transform of derivatives

This is where the real power shows up. Derivatives in time become polynomial expressions in ss:

  • \mathcal{L}\{f'(t)\} = sF(s) - f(0)
  • \mathcal{L}\{f''(t)\} = s^2F(s) - sf(0) - f'(0)

The pattern continues for higher-order derivatives: each additional derivative multiplies by another factor of s and subtracts initial condition terms. This converts a differential equation into an algebraic equation in s, which you can solve with standard algebra.
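
The derivative property can be checked symbolically. A minimal sketch with sympy, assuming f(t) = e^{at} as a concrete test function (any function of exponential order would do):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
a = sp.symbols("a", real=True)

f = sp.exp(a * t)                                   # concrete test function, f(0) = 1
F = sp.laplace_transform(f, t, s, noconds=True)     # F(s) = 1/(s - a)

# Transform of f'(t) computed directly ...
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
# ... versus the derivative property sF(s) - f(0)
rhs = s * F - f.subs(t, 0)

assert sp.simplify(lhs - rhs) == 0                  # the two expressions agree
```

The same pattern verifies the second-derivative rule by differentiating twice and subtracting sf(0) + f'(0).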

Laplace transform of integrals

Integration in time becomes division by ss:

\mathcal{L}\left\{\int_0^t f(\tau) \, d\tau\right\} = \frac{F(s)}{s}

This is useful for solving integro-differential equations (equations that contain both derivatives and integrals of the unknown function) and for analyzing steady-state behavior.

Existence of Laplace transforms

Not every function has a Laplace transform. For F(s) to exist, f(t) must satisfy two conditions:

  1. Piecewise continuity: f(t) must be piecewise continuous on every finite interval in [0, \infty). A finite number of jump discontinuities is fine.
  2. Exponential order: There must exist constants M > 0 and \alpha such that |f(t)| \leq Me^{\alpha t} for all sufficiently large t. This means f(t) can't grow faster than some exponential.

These conditions guarantee that the improper integral defining the transform converges. Functions like e^{t^2} grow too fast and don't have a Laplace transform, but most signals you'll encounter in control theory (exponentials, sinusoids, polynomials times exponentials) satisfy both conditions.

Properties of Laplace transforms

These properties let you manipulate transforms algebraically instead of re-doing the integral every time. Each property connects a time-domain operation to a simpler frequency-domain operation.

Linearity of Laplace transforms

The Laplace transform is a linear operator:

\mathcal{L}\{af(t) + bg(t)\} = aF(s) + bG(s)

This means you can break a complex function into simpler pieces, transform each one separately, and add the results. Linearity is the reason Laplace transforms work so well for linear systems.

Frequency shifting in Laplace transforms

Multiplying by an exponential in time shifts the transform in ss:

\mathcal{L}\{e^{at}f(t)\} = F(s-a)

If you know the transform of f(t), you get the transform of e^{at}f(t) for free by replacing s with s - a. This comes up constantly when dealing with damped oscillations or systems with exponential decay/growth factors.

Time scaling in Laplace transforms

Compressing or stretching time scales the transform:

\mathcal{L}\{f(at)\} = \frac{1}{a}F\left(\frac{s}{a}\right)

Here a > 0. Speeding up a signal in time (larger a) spreads out its frequency content, and vice versa.

Time shifting in Laplace transforms

Delaying a function by a seconds multiplies its transform by an exponential:

\mathcal{L}\{f(t-a)u(t-a)\} = e^{-as}F(s)

The u(t-a) is the unit step function shifted to t = a, which ensures the delayed signal stays zero before the delay kicks in. This property is essential for modeling transport delays and systems where inputs arrive after some lag.

Differentiation in Laplace domain

This repeats the derivative property from earlier, listed here as a formal property:

  • \mathcal{L}\{f'(t)\} = sF(s) - f(0)
  • \mathcal{L}\{f''(t)\} = s^2F(s) - sf(0) - f'(0)

Differentiation in time corresponds to multiplication by s (plus initial condition terms). There's also a dual property: differentiating in the s-domain corresponds to multiplication by -t in time:

\mathcal{L}\{tf(t)\} = -\frac{dF(s)}{ds}


Integration in Laplace domain

Integration in time corresponds to division by s:

\mathcal{L}\left\{\int_0^t f(\tau) \, d\tau\right\} = \frac{F(s)}{s}

This is particularly handy for computing step responses, since the step response is the integral of the impulse response.
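
You can see this numerically: the step response of an LTI system equals the running integral of its impulse response. A sketch using scipy (the second-order system below is an arbitrary example, not one from the text):

```python
import numpy as np
from scipy import signal
from scipy.integrate import cumulative_trapezoid

# Arbitrary example system: G(s) = 1 / (s^2 + 2s + 1)
sys = signal.TransferFunction([1], [1, 2, 1])

t = np.linspace(0, 10, 1000)
_, h = signal.impulse(sys, T=t)   # impulse response h(t)
_, y = signal.step(sys, T=t)      # step response y(t)

# Integrating h(t) numerically reproduces the step response
h_int = cumulative_trapezoid(h, t, initial=0)
assert np.max(np.abs(h_int - y)) < 1e-3
```

Dividing by s in the transform domain is exactly this cumulative integration in the time domain.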

Convolution in Laplace domain

Convolution in time becomes multiplication in the ss-domain:

\mathcal{L}\{(f * g)(t)\} = F(s)G(s)

where the convolution is defined as (f * g)(t) = \int_0^t f(\tau)g(t-\tau) \, d\tau.

This is why transfer functions work. The output of an LTI system is the convolution of the input with the impulse response. In the s-domain, that convolution becomes simple multiplication: Y(s) = G(s)U(s).
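
The equivalence can be demonstrated numerically: convolving an input with the impulse response gives the same output as simulating the system directly. A sketch with scipy (the first-order system and sinusoidal input are arbitrary choices):

```python
import numpy as np
from scipy import signal

sys = signal.TransferFunction([1], [1, 1])    # G(s) = 1/(s + 1), arbitrary example
t = np.linspace(0, 10, 2001)
dt = t[1] - t[0]

u = np.sin(t)                                 # arbitrary input signal
_, h = signal.impulse(sys, T=t)               # impulse response h(t)

# Time-domain convolution, scaled by dt to approximate the integral
y_conv = np.convolve(u, h)[: len(t)] * dt

# Direct simulation of the same system
_, y_sim, _ = signal.lsim(sys, U=u, T=t)

assert np.max(np.abs(y_conv - y_sim)) < 1e-2
```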

Laplace transform tables

Transform tables are your best friend for working with Laplace transforms. Rather than evaluating the integral from scratch, you look up known pairs and use properties to handle variations.

Laplace transforms of common functions

Time-domain f(t)                Laplace transform F(s)
\delta(t) (impulse)             1
u(t) (unit step)                \frac{1}{s}
t (ramp)                        \frac{1}{s^2}
t^n                             \frac{n!}{s^{n+1}}
e^{at}                          \frac{1}{s-a}
\sin(\omega t)                  \frac{\omega}{s^2 + \omega^2}
\cos(\omega t)                  \frac{s}{s^2 + \omega^2}
e^{at}\sin(\omega t)            \frac{\omega}{(s-a)^2 + \omega^2}
e^{at}\cos(\omega t)            \frac{s-a}{(s-a)^2 + \omega^2}
Memorizing at least the first seven of these will save you significant time on exams and homework.
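
You can also regenerate table entries on demand to check your memory. A sympy sketch confirming a few of the pairs above:

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
a, w = sp.symbols("a omega", real=True)

# (time-domain function, expected transform) pairs from the table
pairs = [
    (sp.Heaviside(t), 1 / s),           # unit step
    (t,               1 / s**2),        # ramp
    (sp.exp(a * t),   1 / (s - a)),
    (sp.sin(w * t),   w / (s**2 + w**2)),
    (sp.cos(w * t),   s / (s**2 + w**2)),
]

for f, F_expected in pairs:
    F = sp.laplace_transform(f, t, s, noconds=True)
    assert sp.simplify(F - F_expected) == 0
```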

Laplace transforms of periodic functions

For a periodic function f(t) with period T, you only need the transform of one period. If F_1(s) is the Laplace transform of f(t) over the interval [0, T) (with f(t) = 0 outside that interval), then:

\mathcal{L}\{f(t)\} = \frac{F_1(s)}{1 - e^{-sT}}

This works for square waves, sawtooth waves, triangular waves, and any other periodic signal. The denominator 1 - e^{-sT} accounts for the infinite repetition.
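
As a worked example, take a square wave with period T = 2 that equals 1 on [0, 1) and 0 on [1, 2). One period transforms to F_1(s) = (1 - e^{-s})/s, and the formula then gives \mathcal{L}\{f(t)\} = 1/[s(1 + e^{-s})]. A sympy sketch of that calculation:

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
T = 2  # period of the square wave

# One period: f(t) = 1 on [0, 1), 0 on [1, 2)
F1 = sp.integrate(sp.exp(-s * t), (t, 0, 1))   # (1 - e^{-s}) / s

# Periodic-function formula: L{f} = F1(s) / (1 - e^{-sT})
F = F1 / (1 - sp.exp(-s * T))

# The denominator factors as (1 - e^{-s})(1 + e^{-s}), leaving 1 / (s (1 + e^{-s}))
assert sp.simplify(F - 1 / (s * (1 + sp.exp(-s)))) == 0
```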

Laplace transforms of special functions

Two special functions appear constantly in control theory:

The Dirac delta function \delta(t) models an idealized impulse (infinite amplitude, zero duration, unit area):

\mathcal{L}\{\delta(t)\} = 1

Its transform being simply 1 is why the impulse response of a system equals the inverse transform of the transfer function itself.

The unit ramp function r(t) = tu(t) represents a signal that increases linearly from zero:

\mathcal{L}\{r(t)\} = \frac{1}{s^2}

Step, ramp, and impulse inputs are the three standard test signals used to characterize system behavior.

Applications of Laplace transforms

Laplace transforms for solving ODEs

Laplace transforms provide a systematic method for solving linear ODEs with initial conditions. The procedure is:

  1. Transform both sides of the ODE using the Laplace transform. Apply the derivative property to handle y', y'', etc., substituting in the given initial conditions.
  2. Solve for Y(s) algebraically. Collect all terms involving Y(s) on one side and solve.
  3. Invert using partial fractions and the transform table to get y(t).

For example, to solve y'' + 3y' + 2y = 0 with y(0) = 1 and y'(0) = 0:

  • Transforming: s^2Y(s) - s - 0 + 3[sY(s) - 1] + 2Y(s) = 0
  • Simplifying: Y(s)(s^2 + 3s + 2) = s + 3
  • Solving: Y(s) = \frac{s+3}{(s+1)(s+2)}
  • Partial fractions and inversion give the time-domain solution y(t) = 2e^{-t} - e^{-2t}.

This approach handles initial conditions automatically, which is a major advantage over the method of undetermined coefficients.
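
The worked example above can be carried through end to end with sympy, checking both the partial-fraction step and the original ODE:

```python
import sympy as sp

t = sp.symbols("t", positive=True)
s = sp.symbols("s")

Y = (s + 3) / ((s + 1) * (s + 2))

# Partial fraction expansion: 2/(s+1) - 1/(s+2)
assert sp.simplify(sp.apart(Y, s) - (2 / (s + 1) - 1 / (s + 2))) == 0

# Invert to the time domain: y(t) = 2e^{-t} - e^{-2t}
y = sp.inverse_laplace_transform(Y, s, t)
assert sp.simplify(y - (2 * sp.exp(-t) - sp.exp(-2 * t))) == 0

# Confirm it satisfies y'' + 3y' + 2y = 0 with y(0) = 1, y'(0) = 0
assert sp.simplify(sp.diff(y, t, 2) + 3 * sp.diff(y, t) + 2 * y) == 0
assert y.subs(t, 0) == 1 and sp.diff(y, t).subs(t, 0) == 0
```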

Laplace transforms for system analysis

In control theory, LTI systems are characterized by their transfer function in the s-domain. Taking the Laplace transform of the governing differential equations (with zero initial conditions) yields an algebraic relationship between input and output.

The transfer function captures everything about the system's input-output dynamics: stability, transient behavior, steady-state response, and frequency characteristics. Once you have G(s), you can determine the response to any input by computing Y(s) = G(s)U(s) and inverting.

Transfer functions in Laplace domain

The transfer function of an LTI system is defined as:

G(s) = \frac{Y(s)}{U(s)}

where Y(s) is the Laplace transform of the output and U(s) is the Laplace transform of the input, both with zero initial conditions.

For a system governed by a_2 y'' + a_1 y' + a_0 y = b_1 u' + b_0 u, the transfer function is:

G(s) = \frac{b_1 s + b_0}{a_2 s^2 + a_1 s + a_0}

The roots of the numerator are called zeros and the roots of the denominator are called poles. Poles and zeros together determine the system's behavior.
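
For instance, a transfer function G(s) = (s+3)/(s^2+3s+2) has a zero at -3 and poles at -1 and -2. A scipy sketch extracting them:

```python
import numpy as np
from scipy import signal

# G(s) = (s + 3) / (s^2 + 3s + 2); coefficient lists are in descending powers of s
G = signal.TransferFunction([1, 3], [1, 3, 2])

zeros = np.sort(G.zeros)   # roots of the numerator
poles = np.sort(G.poles)   # roots of the denominator

assert np.allclose(zeros, [-3.0])
assert np.allclose(poles, [-2.0, -1.0])
```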


Stability analysis using Laplace transforms

A system's stability is determined by the locations of its transfer function's poles in the complex s-plane:

  • Stable: All poles have negative real parts (left half-plane). Transients decay to zero.
  • Marginally stable: Poles on the imaginary axis with no repeated poles there. Transients neither grow nor decay.
  • Unstable: Any pole with a positive real part (right half-plane), or repeated poles on the imaginary axis. Transients grow without bound.

When the transfer function is high-order and factoring the denominator is difficult, the Routh-Hurwitz criterion lets you determine whether all poles are in the left half-plane without actually finding them. Root locus methods provide a graphical way to track pole locations as a design parameter (like controller gain) varies.
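
Routh-Hurwitz and root locus matter when you work by hand; numerically, though, you can simply compute the roots of the denominator and test their real parts. A minimal sketch (the helper name and example polynomials are illustrative):

```python
import numpy as np

def all_poles_in_lhp(den_coeffs):
    """True if every root of the denominator polynomial has negative real part."""
    return bool(np.all(np.roots(den_coeffs).real < 0))

# s^2 + 3s + 2 -> poles at -1 and -2: stable
assert all_poles_in_lhp([1, 3, 2])

# s^2 - s + 2 -> complex poles with real part +0.5: unstable
assert not all_poles_in_lhp([1, -1, 2])
```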

Frequency response using Laplace transforms

The frequency response describes how a system responds to sinusoidal inputs at different frequencies. You obtain it by evaluating the transfer function along the imaginary axis:

G(j\omega) = G(s)\big|_{s=j\omega}

The result is a complex number for each frequency \omega. Its magnitude |G(j\omega)| tells you the gain (how much the system amplifies or attenuates that frequency), and its angle \angle G(j\omega) tells you the phase shift.

Bode plots display magnitude (in dB) and phase (in degrees) versus frequency on a logarithmic scale. These plots reveal bandwidth, resonant peaks, roll-off rate, and gain/phase margins, all of which are critical for controller design.
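
A scipy sketch computing Bode magnitude and phase data for an arbitrary lightly damped second-order example:

```python
import numpy as np
from scipy import signal

# Example system: G(s) = 1 / (s^2 + 0.4s + 1), damping ratio 0.2
G = signal.TransferFunction([1], [1, 0.4, 1])

# Magnitude in dB and phase in degrees over a log-spaced frequency grid
w, mag_db, phase_deg = signal.bode(G, w=np.logspace(-2, 2, 500))

# DC gain is G(j0) = 1, i.e. about 0 dB at the lowest frequency
assert abs(mag_db[0]) < 0.01

# Light damping produces a resonant peak above 0 dB near w = 1
assert mag_db.max() > 0
```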

Inverse Laplace transforms

Getting back from the s-domain to the time domain is where you actually extract your answer. Several techniques exist, but partial fraction expansion is by far the most commonly used in practice.

Definition of inverse Laplace transforms

The formal definition is the Bromwich integral:

f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi j} \int_{\gamma-j\infty}^{\gamma+j\infty} F(s)e^{st} \, ds

where \gamma is a real constant chosen so the contour of integration lies to the right of all singularities of F(s). You'll rarely evaluate this integral directly. Instead, you'll use the techniques below.

Partial fraction expansion for inverse Laplace

This is the workhorse method. It breaks a complicated rational function into a sum of simple terms you can look up in a table.

Steps:

  1. Check degrees. If the numerator degree is greater than or equal to the denominator degree, perform polynomial long division first to get a polynomial plus a proper fraction.

  2. Factor the denominator into linear factors (s - p_i) and irreducible quadratic factors (s^2 + bs + c).

  3. Set up partial fractions. For each distinct real pole p_i, write a term \frac{A_i}{s - p_i}. For a repeated pole of multiplicity m, write terms \frac{A_1}{s-p} + \frac{A_2}{(s-p)^2} + \cdots + \frac{A_m}{(s-p)^m}. For each irreducible quadratic, write \frac{Bs + C}{s^2 + bs + c}.

  4. Solve for coefficients using the Heaviside cover-up method (for distinct real poles) or by multiplying both sides by the denominator and matching coefficients.

  5. Invert each term using the transform table.
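
The steps above can be automated to check your hand work. A sympy sketch expanding an arbitrary rational F(s) with a repeated pole and inverting it:

```python
import sympy as sp

t = sp.symbols("t", positive=True)
s = sp.symbols("s")

F = (2 * s + 5) / ((s + 1) * (s + 2) ** 2)   # arbitrary example with a repeated pole

# Steps 3-4: partial fraction expansion (the repeated pole contributes two terms)
expansion = sp.apart(F, s)
assert sp.simplify(expansion - (3 / (s + 1) - 3 / (s + 2) - 1 / (s + 2) ** 2)) == 0

# Step 5: invert term by term using the table pairs
f = sp.inverse_laplace_transform(F, s, t)
assert sp.simplify(f - (3 * sp.exp(-t) - 3 * sp.exp(-2 * t) - t * sp.exp(-2 * t))) == 0
```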

Residue theorem for inverse Laplace

The residue theorem from complex analysis provides an alternative: the inverse Laplace transform equals the sum of residues of F(s)e^{st} at all poles of F(s).

For a simple (non-repeated) pole at s = s_k:

\text{Res}[F(s)e^{st}, s_k] = \lim_{s \to s_k} (s - s_k) F(s)e^{st}

For a pole of multiplicity m_k:

\text{Res}[F(s)e^{st}, s_k] = \frac{1}{(m_k-1)!} \lim_{s \to s_k} \frac{d^{m_k-1}}{ds^{m_k-1}} \left[(s - s_k)^{m_k} F(s)e^{st}\right]

This method is most useful when you have high-order repeated poles where partial fractions become tedious.

Bromwich integral for inverse Laplace

The Bromwich integral is the theoretical foundation for the inverse Laplace transform. While you won't typically evaluate it by hand, it's important to know that:

  • It guarantees uniqueness: if two continuous functions have the same Laplace transform, they're the same function.
  • It can be evaluated using contour integration by closing the contour in the left half-plane and applying the residue theorem, which is how the residue method above is derived.

Numerical methods for inverse Laplace

When F(s) is too complex for analytical inversion (non-rational expressions, transcendental functions, or functions known only numerically), numerical methods approximate f(t):

  • Gaver-Stehfest algorithm: Approximates f(t) using a weighted sum of F(s) evaluated at specific real values of s. Simple to implement but limited in accuracy for oscillatory functions.
  • Talbot algorithm: Deforms the Bromwich contour into a shape that improves numerical convergence. More accurate than Gaver-Stehfest for a wider class of functions.
  • Fourier series method (Dubner-Abate): Approximates the inverse transform using a truncated Fourier series expansion.

These are primarily used in computational tools rather than by-hand calculations.

Laplace transforms in control systems

Laplace transforms for modeling systems

Physical systems governed by linear differential equations (mechanical, electrical, thermal, fluid) can all be modeled using transfer functions. The process is:

  1. Write the governing differential equations from physics (Newton's laws, Kirchhoff's laws, etc.).
  2. Take the Laplace transform of each equation, assuming zero initial conditions.
  3. Solve for the transfer function G(s) = Y(s)/U(s).

Once you have transfer functions for individual components, you can connect them using block diagrams and signal flow graphs. Series connections multiply transfer functions, parallel connections add them, and feedback loops follow the standard formula:

\frac{Y(s)}{R(s)} = \frac{G(s)}{1 + G(s)H(s)}

where G(s) is the forward path and H(s) is the feedback path. This algebraic manipulation of system models is only possible because Laplace transforms convert convolution (the actual physical operation) into multiplication.
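
As a quick example of this block-diagram algebra: closing a unity-feedback loop (H(s) = 1) around G(s) = 1/(s(s+2)) gives 1/(s^2 + 2s + 1). A sympy sketch:

```python
import sympy as sp

s = sp.symbols("s")

G = 1 / (s * (s + 2))   # forward path (arbitrary example)
H = 1                   # unity feedback

# Standard negative-feedback formula
T = sp.simplify(G / (1 + G * H))
assert sp.simplify(T - 1 / (s**2 + 2 * s + 1)) == 0

# Closing the loop moved the open-loop poles {0, -2} to a double pole at -1
assert sp.roots(sp.denom(sp.together(T)), s) == {-1: 2}
```

Note how feedback relocated the poles: the open-loop integrator pole at the origin became part of a stable double pole at -1, which is exactly the kind of pole placement controller design is about.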