Fiveable

📡Advanced Signal Processing Unit 1 Review


1.6 Laplace transform


Written by the Fiveable Content Team • Last updated August 2025

The Laplace transform converts time-domain functions into the complex frequency domain, making it far easier to analyze linear systems and solve differential equations. Where the Fourier transform handles steady-state frequency content, the Laplace transform captures transient behavior and stability information through its complex variable $s$. This guide covers the transform's definition, properties, common transform pairs, inverse methods, and its applications to system analysis, stability, transfer functions, and convolution.

Definition of Laplace transform

The Laplace transform is an integral transform that maps a time-domain function to a function of the complex variable $s = \sigma + j\omega$. By doing so, it converts differential equations into algebraic equations, which are much simpler to manipulate. It's the backbone of linear system analysis in signal processing and control theory.

Laplace transform formula

The unilateral (one-sided) Laplace transform of a function $f(t)$ is defined as:

$$F(s) = \int_0^{\infty} f(t)e^{-st}\,dt$$

where $s = \sigma + j\omega$ is a complex variable.

The idea: you multiply $f(t)$ by the complex exponential $e^{-st}$ and integrate over all positive time. When $\operatorname{Re}(s) > 0$, this exponential decays and acts as a weighting function. For values of $s$ where the integral converges, you get a well-defined function $F(s)$ in the complex frequency domain.

The set of $s$ values for which this integral converges is called the region of convergence (ROC), and it matters for uniquely identifying the time-domain signal from its transform.
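As a quick numerical sanity check on the definition, the sketch below (plain Python, standard library only; the helper name `laplace_numeric`, the trapezoidal rule, and the test signal $e^{-2t}u(t)$ are illustrative choices, not from the text) approximates the integral at a real $s$ inside the ROC and compares it with the known pair $1/(s+a)$:

```python
import math

def laplace_numeric(f, s, t_max=60.0, n=200_000):
    """Approximate the unilateral Laplace integral of f at real s
    with the trapezoidal rule on [0, t_max]."""
    dt = t_max / n
    total = 0.5 * (f(0.0) + f(t_max) * math.exp(-s * t_max))
    for k in range(1, n):
        t = k * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

# f(t) = e^{-2t} u(t); the exact transform is 1/(s + 2) for Re(s) > -2
a, s = 2.0, 1.0
approx = laplace_numeric(lambda t: math.exp(-a * t), s)
exact = 1.0 / (s + a)
print(approx, exact)  # both ≈ 0.3333
```

The truncation at `t_max` is harmless here because the integrand has decayed to essentially zero long before then; for slowly decaying signals you'd need a larger window.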

Laplace transform properties

These properties let you handle operations on signals without going back to the integral definition each time:

  • Linearity: $\mathcal{L}[af(t) + bg(t)] = aF(s) + bG(s)$, where $a$ and $b$ are constants. You can transform sums term by term.
  • Time shifting: If $\mathcal{L}[f(t)] = F(s)$, then $\mathcal{L}[f(t-a)u(t-a)] = e^{-as}F(s)$ for $a > 0$. A delay in time corresponds to multiplication by $e^{-as}$ in the $s$-domain. Note the inclusion of the unit step $u(t-a)$ to maintain causality.
  • Frequency shifting ($s$-domain shifting): $\mathcal{L}[e^{at}f(t)] = F(s-a)$. Multiplying by an exponential in time shifts the transform along the real axis in the $s$-plane.
  • Differentiation in time: $\mathcal{L}[f'(t)] = sF(s) - f(0^-)$. Each derivative brings down a factor of $s$ and subtracts an initial condition term. For the second derivative: $\mathcal{L}[f''(t)] = s^2F(s) - sf(0^-) - f'(0^-)$. This is exactly why the Laplace transform is so useful for solving differential equations with initial conditions.
  • Integration in time: $\mathcal{L}\left[\int_0^t f(\tau)\,d\tau\right] = \frac{F(s)}{s}$. Integration in time becomes division by $s$.
  • Convolution: $\mathcal{L}[f(t) * g(t)] = F(s)G(s)$. Convolution in time becomes multiplication in the $s$-domain.
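The differentiation property can be spot-checked numerically. This sketch (plain Python; the quadrature helper `L` and the choice $f(t) = e^{-2t}$ are illustrative assumptions) compares $\mathcal{L}[f'](s)$ computed directly against $sF(s) - f(0)$:

```python
import math

def L(f, s, t_max=60.0, n=200_000):
    # trapezoidal approximation of the unilateral Laplace integral
    dt = t_max / n
    total = 0.5 * (f(0.0) + f(t_max) * math.exp(-s * t_max))
    for k in range(1, n):
        t = k * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

s = 1.0
f = lambda t: math.exp(-2.0 * t)              # f(t) = e^{-2t}, F(s) = 1/(s+2)
fprime = lambda t: -2.0 * math.exp(-2.0 * t)  # f'(t)

lhs = L(fprime, s)          # transform of the derivative, computed directly
rhs = s * L(f, s) - f(0.0)  # sF(s) - f(0^-)
print(lhs, rhs)  # both ≈ -0.6667
```

Analytically both sides equal $-2/(s+2)$, which is $-2/3$ at $s = 1$.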

Laplace transform vs Fourier transform

Both transforms move signals into a frequency domain, but they serve different purposes and have different scopes:

  • The Fourier transform uses a purely imaginary frequency variable $j\omega$ and requires that the signal be absolutely integrable (or at least have finite energy). It characterizes steady-state frequency content.
  • The Laplace transform uses the complex variable $s = \sigma + j\omega$. The real part $\sigma$ provides an exponential convergence factor, which means the Laplace transform can handle signals that grow over time (like $e^{2t}$) as long as $\sigma$ is chosen large enough.
  • The Fourier transform is a special case of the Laplace transform: evaluate $F(s)$ along the imaginary axis ($s = j\omega$), and you get the Fourier transform, provided the ROC includes the $j\omega$ axis.
  • The Fourier transform gives magnitude and phase vs. frequency. The Laplace transform additionally encodes information about transient behavior and stability through the real part of $s$.

The Fourier transform tells you what frequencies are present. The Laplace transform tells you that plus whether those components are growing, decaying, or sustained.

Laplace transform of common signals

Knowing these standard transform pairs by heart saves enormous time. They serve as building blocks for more complex signals.

Laplace transform of unit step function

The unit step function (Heaviside function) is:

$$u(t) = \begin{cases} 0, & t < 0 \\ 1, & t \geq 0 \end{cases}$$

Its Laplace transform is:

$$\mathcal{L}[u(t)] = \frac{1}{s}, \quad \operatorname{Re}(s) > 0$$

This is one of the most frequently used pairs. The unit step models the sudden onset of a constant signal, like flipping a switch at $t = 0$.

Laplace transform of exponential function

For $f(t) = e^{at}u(t)$, where $a$ is a real or complex constant:

$$\mathcal{L}[e^{at}u(t)] = \frac{1}{s-a}, \quad \operatorname{Re}(s) > \operatorname{Re}(a)$$

  • When $a < 0$, the signal decays and the pole at $s = a$ sits in the left half-plane (stable).
  • When $a > 0$, the signal grows exponentially and the pole is in the right half-plane (unstable).

This transform pair is the foundation for understanding how pole locations relate to time-domain behavior.

Laplace transform of sine and cosine functions

For sinusoidal signals multiplied by the unit step:

$$\mathcal{L}[\sin(\omega t)\,u(t)] = \frac{\omega}{s^2 + \omega^2}$$

$$\mathcal{L}[\cos(\omega t)\,u(t)] = \frac{s}{s^2 + \omega^2}$$

Both have poles at $s = \pm j\omega$, which sit on the imaginary axis. This makes sense: pure sinusoids neither grow nor decay, so they're marginally stable.

For damped sinusoids like $e^{-\alpha t}\sin(\omega t)$, apply the frequency shifting property to get:

$$\mathcal{L}[e^{-\alpha t}\sin(\omega t)\,u(t)] = \frac{\omega}{(s+\alpha)^2 + \omega^2}$$
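The damped-sinusoid pair can be verified the same way as the others. A minimal numerical sketch (plain Python; the quadrature helper and the parameter values $\alpha = 1$, $\omega = 2$, $s = 1$ are illustrative):

```python
import math

def L(f, s, t_max=40.0, n=200_000):
    # trapezoidal approximation of the unilateral Laplace integral
    dt = t_max / n
    total = 0.5 * (f(0.0) + f(t_max) * math.exp(-s * t_max))
    for k in range(1, n):
        t = k * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

alpha, w, s = 1.0, 2.0, 1.0
approx = L(lambda t: math.exp(-alpha * t) * math.sin(w * t), s)
exact = w / ((s + alpha) ** 2 + w ** 2)  # omega / ((s+alpha)^2 + omega^2)
print(approx, exact)  # both ≈ 0.25
```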

Inverse Laplace transform

The inverse Laplace transform recovers the time-domain signal from its $s$-domain representation. In practice, you'll rarely evaluate the formal integral directly. Instead, you'll use lookup tables combined with algebraic techniques.

Definition of inverse Laplace transform

The formal definition is:

$$f(t) = \frac{1}{2\pi j} \int_{\gamma - j\infty}^{\gamma + j\infty} F(s)e^{st}\,ds$$

Here, $\gamma$ is a real constant chosen so the vertical contour of integration lies within the ROC of $F(s)$. This is a contour integral in the complex plane, and while it's important to know it exists, you'll almost always use one of the practical methods below.

Inverse Laplace transform methods

Three main approaches:

  • Partial fraction expansion: Decompose $F(s)$ into simpler terms that match entries in a standard transform table. This is the most common method for rational functions of $s$.
  • Residue theorem: Use complex analysis to evaluate the contour integral by computing residues at the poles of $F(s)e^{st}$. Useful when partial fractions become unwieldy.
  • Convolution method: Express $F(s)$ as a product $F_1(s) \cdot F_2(s)$, invert each factor separately, then convolve the results in the time domain.

For most problems in this course, partial fraction expansion is the go-to technique.

Partial fraction expansion for inverse Laplace transform

Here's the step-by-step process:

  1. Ensure $F(s)$ is a proper fraction (degree of numerator < degree of denominator). If not, perform polynomial long division first.

  2. Factor the denominator into linear terms $(s - p_i)$ and irreducible quadratic terms $(s^2 + bs + c)$.

  3. Set up the partial fraction form. For each distinct linear factor $(s - p_i)$, include a term $\frac{A_i}{s - p_i}$. For each repeated linear factor $(s - p_i)^n$, include terms $\frac{A_1}{s - p_i} + \frac{A_2}{(s - p_i)^2} + \cdots + \frac{A_n}{(s - p_i)^n}$. For each irreducible quadratic, include $\frac{Bs + C}{s^2 + bs + c}$.

  4. Solve for the unknown coefficients by multiplying both sides by the denominator and either substituting convenient values of $s$ (the "cover-up" method) or equating coefficients of like powers of $s$.

  5. Invert each term using a table of standard Laplace transform pairs.

Example: Find the inverse Laplace transform of $F(s) = \frac{2s+3}{(s+1)(s^2+4)}$.

Set up: $F(s) = \frac{A}{s+1} + \frac{Bs+C}{s^2+4}$

Multiply through by $(s+1)(s^2+4)$:

$$2s + 3 = A(s^2+4) + (Bs+C)(s+1)$$

Setting $s = -1$: $2(-1)+3 = A(1+4)$ gives $A = \frac{1}{5}$.

Expanding and equating coefficients gives the rest: the $s^2$ terms require $A + B = 0$, so $B = -\frac{1}{5}$, and the constant terms require $4A + C = 3$, so $C = \frac{11}{5}$. Each resulting term maps directly to a known transform pair (exponential, sine, cosine).
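The cover-up and coefficient-matching steps from this example can be carried out in exact rational arithmetic with the standard-library `fractions` module (the variable names mirror the worked example; the sample points in the check are arbitrary):

```python
from fractions import Fraction

# F(s) = (2s+3)/((s+1)(s^2+4)) = A/(s+1) + (Bs+C)/(s^2+4)
A = Fraction(2 * (-1) + 3, (-1) ** 2 + 4)  # cover-up: (2s+3)/(s^2+4) at s = -1
B = -A                                     # s^2 terms: A + B = 0
C = Fraction(3) - 4 * A                    # constant terms: 4A + C = 3
print(A, B, C)  # 1/5 -1/5 11/5

# sanity check: both sides of the decomposition agree at sample points
for s in (1, 2, 5):
    lhs = Fraction(2 * s + 3, (s + 1) * (s * s + 4))
    rhs = A / (s + 1) + (B * s + C) / (s * s + 4)
    assert lhs == rhs
```

A further consistency check: the $s^1$ coefficients give $B + C = 2$, and indeed $-\frac{1}{5} + \frac{11}{5} = 2$.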

Applications of Laplace transform

Laplace transform in linear systems analysis

A linear time-invariant (LTI) system obeys superposition: the response to a sum of inputs equals the sum of individual responses. The Laplace transform exploits this by converting the system's governing differential equation into an algebraic equation.

If the system has impulse response $h(t)$, then the output for any input $x(t)$ is the convolution $y(t) = x(t) * h(t)$. In the Laplace domain, this becomes simple multiplication:

$$Y(s) = X(s) \cdot H(s)$$

This is far easier to compute than evaluating the convolution integral directly, especially for higher-order systems.

Laplace transform for solving differential equations

The Laplace transform turns a differential equation with initial conditions into an algebraic equation. Here's the general approach:

  1. Take the Laplace transform of every term in the differential equation, using the differentiation property to handle derivatives. Initial conditions appear naturally as constants.
  2. Solve the resulting algebraic equation for $Y(s)$.
  3. Apply the inverse Laplace transform (typically via partial fractions) to get $y(t)$.

Example: Solve $y''(t) + 3y'(t) + 2y(t) = 0$ with $y(0) = 1$, $y'(0) = 0$.

Taking the Laplace transform: $s^2Y(s) - s - 0 + 3[sY(s) - 1] + 2Y(s) = 0$

Solving: $Y(s) = \frac{s+3}{s^2+3s+2} = \frac{s+3}{(s+1)(s+2)}$

Partial fractions and inverse transform yield: $y(t) = 2e^{-t} - e^{-2t}$
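A solution obtained this way is easy to verify by substitution. This sketch (plain Python, standard library only) checks that the candidate $y(t) = 2e^{-t} - e^{-2t}$ satisfies both initial conditions and the differential equation:

```python
import math

# candidate solution from the worked example: y(t) = 2e^{-t} - e^{-2t}
y   = lambda t: 2 * math.exp(-t) - math.exp(-2 * t)
yp  = lambda t: -2 * math.exp(-t) + 2 * math.exp(-2 * t)  # y'
ypp = lambda t: 2 * math.exp(-t) - 4 * math.exp(-2 * t)   # y''

# initial conditions: y(0) = 1, y'(0) = 0
print(y(0.0), yp(0.0))  # 1.0 0.0

# the residual y'' + 3y' + 2y should vanish for every t
for t in (0.0, 0.5, 1.0, 3.0):
    assert abs(ypp(t) + 3 * yp(t) + 2 * y(t)) < 1e-12
```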

This method handles initial conditions automatically, which is a major advantage over Fourier-based approaches.


Laplace transform in control systems

Control systems use feedback to regulate a process. The Laplace transform is central to their analysis and design because:

  • Transfer functions $H(s) = \frac{Y(s)}{X(s)}$ compactly describe system dynamics as ratios of polynomials in $s$.
  • Block diagram algebra lets you combine transfer functions of subsystems (series, parallel, feedback) using simple algebraic rules.
  • Stability, transient response, and steady-state error can all be determined from the poles and zeros of $H(s)$ without simulating the system in time.

For instance, a second-order system with transfer function $H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$ has its behavior entirely characterized by the natural frequency $\omega_n$ and damping ratio $\zeta$.
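Reading $\omega_n$ and $\zeta$ off a given denominator is a one-liner worth internalizing. A minimal sketch (the helper name `second_order_params` and the example denominator $s^2 + 2s + 4$ are illustrative):

```python
import math

def second_order_params(b, c):
    """Given the denominator s^2 + b*s + c of a standard second-order
    system, match against s^2 + 2*zeta*wn*s + wn^2."""
    wn = math.sqrt(c)       # wn^2 = c
    zeta = b / (2 * wn)     # 2*zeta*wn = b
    return wn, zeta

# e.g. H(s) = 4 / (s^2 + 2s + 4): wn = 2 rad/s, zeta = 0.5 (underdamped)
wn, zeta = second_order_params(2.0, 4.0)
print(wn, zeta)  # 2.0 0.5
```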

Laplace transform and system stability

Stability determines whether a system's output stays bounded when the input is bounded (BIBO stability). The Laplace transform makes stability analysis straightforward by linking it to pole locations.

Poles and zeros in Laplace domain

  • Poles are values of $s$ where $H(s) \to \infty$ (roots of the denominator).
  • Zeros are values of $s$ where $H(s) = 0$ (roots of the numerator).

The stability rule for causal LTI systems:

  • Stable: All poles have negative real parts (left half-plane). Time-domain modes decay.
  • Marginally stable: Poles on the imaginary axis with no repeated poles there. Time-domain modes are sustained oscillations or constants.
  • Unstable: Any pole in the right half-plane, or repeated poles on the imaginary axis. Time-domain modes grow without bound.

Each pole at $s = \sigma + j\omega$ contributes a time-domain component proportional to $e^{\sigma t}$. If $\sigma < 0$, it decays. If $\sigma > 0$, it grows. That's the direct link between pole location and stability.

Stability criteria using Laplace transform

You don't always need to find the poles explicitly. The Routh-Hurwitz criterion determines whether all roots of the characteristic polynomial lie in the left half-plane by examining only the polynomial's coefficients.

Routh-Hurwitz stability criterion

The Routh-Hurwitz criterion applies to the characteristic equation (denominator of the transfer function set to zero). For a polynomial $a_n s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0 = 0$:

  1. Necessary condition: All coefficients $a_i$ must be positive (assuming $a_n > 0$). If any coefficient is zero or negative, the system is not stable.

  2. Construct the Routh array. The first two rows contain the coefficients of alternating powers of $s$:

    • Row $s^n$: $a_n, a_{n-2}, a_{n-4}, \ldots$
    • Row $s^{n-1}$: $a_{n-1}, a_{n-3}, a_{n-5}, \ldots$
  3. Compute subsequent rows using the formula: each entry is a determinant-based combination of entries from the two rows above, divided by the first element of the row directly above.

  4. Check the first column. The system is stable if and only if all entries in the first column of the Routh array are positive (same sign). The number of sign changes in the first column equals the number of roots in the right half-plane.

Special cases (a zero in the first column, or an entire row of zeros) require additional handling but indicate marginal stability or symmetric root patterns.
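The array construction above mechanizes nicely. A minimal sketch for the regular case (plain Python; the function name `routh_first_column` is illustrative, and the code deliberately does not handle the zero-pivot special cases just mentioned):

```python
def routh_first_column(coeffs):
    """First column of the Routh array for a_n s^n + ... + a_0,
    regular case only (assumes no zero pivots appear).
    Sign changes in the result count right-half-plane roots."""
    n = len(coeffs) - 1
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    rows[1] += [0.0] * (width - len(rows[1]))  # pad second row
    for _ in range(n - 1):
        up, cur = rows[-2], rows[-1]
        # determinant of the 2x2 block over the pivot of the row above
        new = [(cur[0] * up[j + 1] - up[0] * cur[j + 1]) / cur[0]
               for j in range(width - 1)]
        new.append(0.0)
        rows.append(new)
    return [r[0] for r in rows[: n + 1]]

# s^3 + 2s^2 + 3s + 1: all first-column entries positive -> stable
col = routh_first_column([1.0, 2.0, 3.0, 1.0])
print(col)  # [1.0, 2.0, 2.5, 1.0]
print(all(c > 0 for c in col))  # True
```

By contrast, $s^3 + s^2 + s + 6$ produces first column $[1, 1, -5, 6]$: two sign changes, hence two right-half-plane roots.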

Laplace transform and transfer functions

Definition of transfer function

The transfer function of an LTI system is defined as:

$$H(s) = \frac{Y(s)}{X(s)}$$

where $Y(s)$ and $X(s)$ are the Laplace transforms of the output and input, respectively, with all initial conditions set to zero.

The transfer function is a property of the system itself, independent of any particular input. It's also the Laplace transform of the impulse response: $H(s) = \mathcal{L}[h(t)]$.

Laplace transform for deriving transfer functions

To derive a transfer function from a system's differential equation:

  1. Write the differential equation relating input $x(t)$ and output $y(t)$.
  2. Take the Laplace transform of both sides, setting all initial conditions to zero.
  3. Collect terms and solve for $\frac{Y(s)}{X(s)}$.

For example, given $y''(t) + 5y'(t) + 6y(t) = 2x'(t) + x(t)$:

$$s^2Y(s) + 5sY(s) + 6Y(s) = 2sX(s) + X(s)$$

$$H(s) = \frac{Y(s)}{X(s)} = \frac{2s + 1}{s^2 + 5s + 6} = \frac{2s+1}{(s+2)(s+3)}$$

The poles at $s = -2$ and $s = -3$ are both in the left half-plane, so this system is stable.
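For a quadratic denominator, the pole check is just the quadratic formula. A minimal sketch for this example's denominator $s^2 + 5s + 6$:

```python
import math

# poles of H(s) = (2s+1)/(s^2 + 5s + 6): roots of the denominator
a, b, c = 1.0, 5.0, 6.0
disc = b * b - 4 * a * c  # 25 - 24 = 1
p1 = (-b + math.sqrt(disc)) / (2 * a)
p2 = (-b - math.sqrt(disc)) / (2 * a)
print(p1, p2)  # -2.0 -3.0

# a causal LTI system is BIBO stable iff every pole has negative real part
print(p1 < 0 and p2 < 0)  # True
```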

Bode plots using Laplace transform

Bode plots display the frequency response of a system as two separate graphs: magnitude and phase, both plotted against frequency on a logarithmic scale.

To create a Bode plot from a transfer function:

  1. Substitute $s = j\omega$ into $H(s)$ to get $H(j\omega)$.
  2. Compute the magnitude in decibels: $20\log_{10}|H(j\omega)|$.
  3. Compute the phase: $\angle H(j\omega)$ in degrees.
  4. Plot both against $\omega$ on a log scale.
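The steps above amount to complex arithmetic at each frequency. This sketch evaluates one Bode point for an assumed first-order example $H(s) = 1/(s+1)$ at its corner frequency (plain Python, `cmath` from the standard library):

```python
import cmath, math

# frequency response of H(s) = 1/(s+1), an assumed first-order example
H = lambda s: 1.0 / (s + 1.0)

w = 1.0                                   # rad/s, the corner frequency
Hjw = H(1j * w)                           # step 1: substitute s = j*omega
mag_db = 20.0 * math.log10(abs(Hjw))      # step 2: magnitude in dB
phase_deg = math.degrees(cmath.phase(Hjw))  # step 3: phase in degrees
print(round(mag_db, 2), round(phase_deg, 1))  # -3.01 -45.0
```

Sweeping `w` over a logarithmic grid and collecting these two numbers gives the full plot; the familiar "−3 dB at the corner, −45° phase" result drops out directly.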

Bode plots reveal important system characteristics at a glance:

  • Bandwidth: the frequency range where the system passes signals effectively.
  • Gain margin and phase margin: measures of how close the system is to instability in a feedback configuration.
  • Resonant peaks: frequencies where the system amplifies signals.
  • Roll-off rate: how quickly the system attenuates signals beyond its bandwidth (e.g., $-20$ dB/decade per pole).

Laplace transform and convolution

Convolution in time domain

For causal LTI systems, the output $y(t)$ is the convolution of the input $x(t)$ with the impulse response $h(t)$:

$$(x * h)(t) = \int_0^{t} x(\tau)h(t-\tau)\,d\tau$$

You can think of this as flipping $h(\tau)$, sliding it across $x(\tau)$, and computing the area of overlap at each time $t$. For complicated signals, evaluating this integral directly is tedious.

Convolution theorem for Laplace transform

The convolution theorem states:

$$\mathcal{L}[(f * g)(t)] = F(s) \cdot G(s)$$

Convolution in time becomes multiplication in the $s$-domain. This is one of the most practically important properties of the Laplace transform, because multiplication is far simpler than integration.

The reverse also holds: multiplication in time corresponds to a (scaled) convolution of the transforms along a contour in the $s$-domain, though this direction is used less frequently.

Laplace transform for solving convolution problems

Given input $x(t)$ and impulse response $h(t)$, find the output $y(t)$:

  1. Compute $X(s) = \mathcal{L}[x(t)]$ and $H(s) = \mathcal{L}[h(t)]$.
  2. Multiply: $Y(s) = X(s) \cdot H(s)$.
  3. Apply the inverse Laplace transform: $y(t) = \mathcal{L}^{-1}[Y(s)]$.

This three-step process replaces the convolution integral entirely. It's especially powerful when $X(s)$ and $H(s)$ are rational functions, because their product is also rational and can be inverted via partial fractions.
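Both routes can be compared on a concrete case. For the assumed example $x(t) = u(t)$ and $h(t) = e^{-t}u(t)$, the Laplace route gives $Y(s) = \frac{1}{s(s+1)}$, whose partial fractions invert to $y(t) = 1 - e^{-t}$; the sketch below (plain Python) checks that against the convolution integral evaluated numerically:

```python
import math

# x(t) = u(t), h(t) = e^{-t} u(t)
# Laplace route: Y(s) = (1/s) * 1/(s+1)  ->  y(t) = 1 - e^{-t}
y_laplace = lambda t: 1.0 - math.exp(-t)

# direct route: trapezoidal evaluation of int_0^t e^{-(t - tau)} d tau
def y_direct(t, n=100_000):
    dt = t / n
    total = 0.5 * (math.exp(-t) + 1.0)  # integrand at tau = 0 and tau = t
    for k in range(1, n):
        total += math.exp(-(t - k * dt))
    return total * dt

for t in (0.5, 1.0, 2.0):
    assert abs(y_direct(t) - y_laplace(t)) < 1e-6
print(round(y_laplace(1.0), 4))  # 0.6321
```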

Laplace transform and initial value theorem

The initial and final value theorems let you extract time-domain information directly from $F(s)$ without performing the full inverse transform.

Statement of initial value theorem

If $f(t)$ and its derivative $f'(t)$ are both Laplace transformable, then:

$$\lim_{t \to 0^+} f(t) = \lim_{s \to \infty} sF(s)$$

provided the limit on the right exists. This gives you the initial value of $f(t)$ by examining the behavior of $sF(s)$ as $s$ grows large.

Why it works: As $s \to \infty$, the exponential $e^{-st}$ in the Laplace integral decays so rapidly that only the behavior of $f(t)$ near $t = 0$ contributes.

For completeness, the final value theorem is:

$$\lim_{t \to \infty} f(t) = \lim_{s \to 0} sF(s)$$

This is valid only if $f(t)$ converges to a finite limit: all poles of $sF(s)$ must lie in the left half-plane (equivalently, $F(s)$ may have at most a simple pole at $s = 0$, with every other pole strictly in the left half-plane). Applying it to an unstable or oscillatory system gives a meaningless result.
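Both theorems can be exercised on the transform from the differential-equation example, $F(s) = \frac{s+3}{(s+1)(s+2)}$, whose time signal is $f(t) = 2e^{-t} - e^{-2t}$ (the limits are approximated here by evaluating at very large and very small $s$, an illustrative numerical shortcut):

```python
# F(s) = (s+3)/((s+1)(s+2)), the transform of f(t) = 2e^{-t} - e^{-2t}
sF = lambda s: s * (s + 3.0) / ((s + 1.0) * (s + 2.0))

# initial value theorem: s -> infinity should give f(0+) = 2 - 1 = 1
print(round(sF(1e9), 6))   # 1.0

# final value theorem: s -> 0 should give lim f(t) = 0 (both modes decay)
print(round(sF(1e-9), 6))  # 0.0
```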

The initial value theorem checks behavior at $t = 0^+$ via $s \to \infty$. The final value theorem checks behavior at $t \to \infty$ via $s \to 0$. Both are quick sanity checks on your $s$-domain expressions.