Fiveable

🎛️Control Theory Unit 1 Review

1.5 Complex variables

Written by the Fiveable Content Team • Last updated August 2025

Complex numbers

Complex numbers extend real numbers by introducing the imaginary unit i, where i^2 = -1. A complex number packages two pieces of information into one entity, which makes it possible to represent both amplitude and phase simultaneously. In control theory, this is essential: transfer functions, frequency responses, and stability criteria all rely on complex variable techniques.

Real and imaginary parts

A complex number z is written as z = a + bi, where a is the real part and b is the imaginary part. You can extract these using the notation \Re(z) = a and \Im(z) = b.

On the complex plane, a corresponds to the horizontal axis and b to the vertical axis. So the complex number z = 3 + 4i sits at the point (3, 4).

Modulus and argument

The modulus (absolute value) of z = a + bi measures its distance from the origin:

|z| = \sqrt{a^2 + b^2}

For example, |3 + 4i| = \sqrt{9 + 16} = 5.

The argument (phase) is the angle from the positive real axis to the vector pointing at z:

\arg(z) = \text{atan2}(b, a)

Note: the simple formula \arctan(b/a) only works when a > 0. For the general case (any quadrant), use \text{atan2}(b, a), which accounts for the sign of both a and b. Together, the modulus and argument describe a complex number by its magnitude and direction rather than its coordinates.
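Python's standard library exposes exactly this quadrant-correct argument (`math.atan2`, or `cmath.phase` for complex numbers). A minimal sketch, with the specific numbers chosen only for illustration:

```python
import cmath
import math

z = 3 + 4j
modulus = abs(z)                      # distance from origin: 5.0
argument = cmath.phase(z)             # atan2(4, 3), quadrant-correct

# The naive arctan(b/a) picks the wrong quadrant in the left half-plane:
w = -3 + 4j
naive = math.atan(w.imag / w.real)    # atan(-4/3): fourth-quadrant angle
correct = math.atan2(w.imag, w.real)  # second-quadrant angle, same as cmath.phase(w)

print(modulus, argument)
print(naive, correct)                 # the two angles differ by pi
```

For a point in the left half-plane, the naive formula is off by exactly π, which is why atan2 is the safe default.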

Polar and exponential forms

Instead of rectangular form z = a + bi, you can write a complex number in polar form:

z = r(\cos\theta + i\sin\theta)

where r = |z| and \theta = \arg(z).

The exponential form uses Euler's formula (e^{i\theta} = \cos\theta + i\sin\theta) to write this more compactly:

z = re^{i\theta}

Why bother with these forms? Multiplication and division become much simpler. To multiply two complex numbers, you multiply their moduli and add their arguments. To divide, you divide moduli and subtract arguments. Converting fluently between rectangular, polar, and exponential forms is a skill you'll use constantly in control theory, especially when working with transfer functions evaluated at s = j\omega.
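A quick numerical check of the polar multiplication rule, using `cmath.rect` to convert back from polar to rectangular form; the particular numbers are just for illustration:

```python
import cmath

z1 = 1 + 1j       # modulus sqrt(2), argument pi/4
z2 = 2j           # modulus 2,       argument pi/2

# Multiply in rectangular form...
prod = z1 * z2

# ...and compare with the polar rule: multiply moduli, add arguments.
r = abs(z1) * abs(z2)
theta = cmath.phase(z1) + cmath.phase(z2)
prod_polar = cmath.rect(r, theta)   # convert (r, theta) back to rectangular

print(prod, prod_polar)   # both ≈ -2+2j
```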

Complex plane

The complex plane (also called the Argand plane) maps every complex number to a point in two dimensions. The horizontal axis is the real axis, and the vertical axis is the imaginary axis.

Argand diagram

An Argand diagram simply plots complex numbers as points or vectors on the complex plane. For z = a + bi:

  • The point sits at coordinates (a, b)
  • The distance from the origin to the point equals |z|
  • The angle from the positive real axis to the line connecting the origin to the point equals \arg(z)

Graphical representation of complex numbers

You can represent a complex number either as a point (a, b) or as a vector from the origin to that point. The vector view is especially useful for visualizing operations:

  • Addition: place vectors tip-to-tail (parallelogram rule)
  • Subtraction: reverse the second vector, then add
  • Multiplication: multiply lengths, add angles
  • Division: divide lengths, subtract angles

These graphical intuitions help when you're interpreting pole-zero plots or Nyquist diagrams in later units.

Complex functions

A complex function maps complex numbers from one plane (the domain) to another (the codomain). In control theory, the transfer function G(s) is a complex function of the complex variable s. Understanding how these functions behave is key to analyzing system stability and performance.

Analytic functions

A function is analytic (or holomorphic) at a point if it's complex-differentiable in an entire neighborhood around that point. This is a much stronger requirement than real differentiability. For real functions, being differentiable at a point says nothing about nearby points. For complex functions, analyticity at a point guarantees differentiability everywhere nearby and even guarantees the function can be represented as a convergent power series.

Common analytic functions: polynomials, e^z, \sin(z), \cos(z). Analytic functions also preserve angles under mapping (conformal property), which matters for the mapping techniques covered later.

Cauchy-Riemann equations

The Cauchy-Riemann equations give you a concrete test for analyticity. Write a complex function as f(z) = u(x, y) + iv(x, y), where u and v are real-valued functions. Then f is analytic if and only if:

  • \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}
  • \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}

(assuming the partial derivatives are continuous). These equations also let you compute the complex derivative: f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x}.
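The equations are easy to check numerically. A minimal sketch for f(z) = z^2, whose real and imaginary parts are u = x^2 - y^2 and v = 2xy (the helper names `u`, `v`, `partial` are just illustrative):

```python
# Numerically verify the Cauchy-Riemann equations for f(z) = z^2,
# where u(x, y) = x^2 - y^2 and v(x, y) = 2xy.
def u(x, y):
    return x * x - y * y

def v(x, y):
    return 2 * x * y

def partial(g, x, y, wrt, h=1e-6):
    # Central finite difference with respect to x or y
    if wrt == "x":
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 1.3, -0.7
ux = partial(u, x0, y0, "x")
uy = partial(u, x0, y0, "y")
vx = partial(v, x0, y0, "x")
vy = partial(v, x0, y0, "y")

print(abs(ux - vy))        # ≈ 0: u_x = v_y
print(abs(uy + vx))        # ≈ 0: u_y = -v_x
print(complex(ux, vx))     # f'(z0) = u_x + i*v_x ≈ 2*z0
```

The last line uses the derivative formula from the text: for f(z) = z^2 it reproduces f'(z_0) = 2z_0.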


Harmonic functions

A real-valued function is harmonic if it's twice continuously differentiable and satisfies Laplace's equation:

\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = 0

Here's the connection to analytic functions: if f(z) = u(x, y) + iv(x, y) is analytic, then both u and v are automatically harmonic. This follows directly from the Cauchy-Riemann equations. Harmonic functions have useful properties like the mean value property (the value at any point equals the average over any surrounding circle), which shows up in boundary value problems relevant to control design.
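Both claims can be spot-checked numerically. A small sketch using u = \Re(z^3) = x^3 - 3xy^2, which is harmonic because z^3 is analytic (the point, radius, and sample counts are arbitrary choices):

```python
import math

# Check that u(x, y) = Re(z^3) = x^3 - 3*x*y^2 satisfies Laplace's equation,
# and verify the mean value property on a surrounding circle.
def u(x, y):
    return x ** 3 - 3 * x * y ** 2

x0, y0, h = 0.4, 0.9, 1e-4
# 5-point finite-difference Laplacian
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / h ** 2

# Average of u over a circle of radius r centered at (x0, y0)
n, r = 1000, 0.5
avg = sum(u(x0 + r * math.cos(2 * math.pi * k / n),
            y0 + r * math.sin(2 * math.pi * k / n)) for k in range(n)) / n

print(lap)                 # ≈ 0 (Laplace's equation)
print(avg - u(x0, y0))     # ≈ 0 (mean value property)
```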

Complex integration

Complex integration extends ordinary integration to functions of a complex variable along paths in the complex plane. It's a powerful technique that connects to the residue theorem and, through that, to practical tools like inverse Laplace transforms and stability analysis.

Contour integrals

A contour integral evaluates a complex function f(z) along a curve C in the complex plane:

\int_C f(z)\, dz

To compute this, you parametrize the curve C using z(t) for t \in [a, b], then evaluate:

\int_C f(z)\, dz = \int_a^b f(z(t))\, z'(t)\, dt

In control theory, contour integrals appear when computing inverse Laplace transforms and when applying the Nyquist stability criterion.
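The parametrization recipe translates directly into code. A minimal sketch for the unit circle z(t) = e^{it}, t \in [0, 2\pi], so z'(t) = ie^{it} (the helper name `contour_integral` and the quadrature rule are illustrative choices):

```python
import cmath
import math

def contour_integral(f, n=2000):
    # Midpoint-rule approximation of ∫_C f(z) dz on the unit circle
    total = 0j
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        z = cmath.exp(1j * t)        # z(t) = e^{it}
        dz = 1j * z                  # z'(t) = i e^{it}
        total += f(z) * dz * (2 * math.pi / n)
    return total

print(contour_integral(lambda z: 1 / z))   # ≈ 2*pi*i (pole enclosed)
print(contour_integral(lambda z: z ** 2))  # ≈ 0 (analytic integrand)
```

The two test integrands preview the next results: an analytic integrand gives zero (Cauchy's theorem), while 1/z picks up 2\pi i from the enclosed singularity.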

Cauchy's integral theorem

If f(z) is analytic throughout a simply connected domain D, then for any closed curve C inside D:

\oint_C f(z)\, dz = 0

"Simply connected" means the domain has no holes. This theorem tells you that the integral of an analytic function around a closed path is zero, which means the value of a contour integral between two points doesn't depend on which path you take (as long as f is analytic in the region between the paths).

Cauchy's integral formula

Building on the integral theorem, Cauchy's integral formula lets you recover the value of an analytic function at any interior point from its values on a surrounding contour:

f(z_0) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0}\, dz

where C is any simple closed curve enclosing z_0, and f is analytic on and inside C. This remarkable result means that knowing an analytic function on a boundary determines it everywhere inside. It's also the foundation for deriving Taylor series of analytic functions and for the residue theorem.
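You can watch the formula recover interior values from boundary data. A minimal sketch that reconstructs e^{z_0} from samples of e^z on the unit circle (the helper name `cauchy_value` and the choice of z_0 are illustrative):

```python
import cmath
import math

def cauchy_value(f, z0, n=2000):
    # Cauchy's integral formula on the unit circle; z0 must lie inside it.
    total = 0j
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        z = cmath.exp(1j * t)
        dz = 1j * z
        total += f(z) / (z - z0) * dz * (2 * math.pi / n)
    return total / (2j * math.pi)

z0 = 0.3 + 0.2j
print(cauchy_value(cmath.exp, z0))   # ≈ e^{z0}, using only boundary values
```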

Residue theorem

The residue theorem is one of the most practically useful results in complex analysis. It relates a contour integral of a meromorphic function (analytic everywhere except at isolated poles) to the sum of its residues:

\oint_C f(z)\, dz = 2\pi i \sum_{k=1}^n \text{Res}(f, z_k)

where z_1, z_2, \ldots, z_n are the poles enclosed by C. This converts a potentially difficult integral into a sum of residues, which are often straightforward to compute.
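A quick numerical confirmation, sketched for f(z) = 1/(z(z - 3)) on the unit circle: only the simple pole at z = 0 is enclosed, with residue 1/(0 - 3) = -1/3 (the example function and helper name are illustrative):

```python
import cmath
import math

def contour_integral(f, n=4000):
    # Midpoint-rule approximation of ∮_C f(z) dz on the unit circle
    total = 0j
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        z = cmath.exp(1j * t)
        total += f(z) * (1j * z) * (2 * math.pi / n)
    return total

f = lambda z: 1 / (z * (z - 3))         # poles at z = 0 (inside) and z = 3 (outside)
integral = contour_integral(f)
predicted = 2j * math.pi * (-1 / 3)     # 2*pi*i times the enclosed residue
print(integral, predicted)              # the two agree
```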

Singularities and residues

A singularity is a point where f(z) fails to be analytic. The main types:

  • Removable singularity: the function can be redefined at the point to make it analytic (e.g., \frac{\sin z}{z} at z = 0)
  • Pole of order n: the function blows up, but (z - z_0)^n f(z) remains analytic near z_0
  • Essential singularity: neither removable nor a pole (e.g., e^{1/z} at z = 0)

The residue of f(z) at a pole z_0 is the coefficient of (z - z_0)^{-1} in the Laurent series expansion of f around z_0. It captures the "essential contribution" of that singularity to any contour integral enclosing it.

Calculation of residues

For a simple pole (order 1) at z_0:

\text{Res}(f, z_0) = \lim_{z \to z_0} (z - z_0) f(z)

For a pole of order n at z_0:

\text{Res}(f, z_0) = \frac{1}{(n-1)!} \lim_{z \to z_0} \frac{d^{n-1}}{dz^{n-1}} \left[(z - z_0)^n f(z)\right]

Alternatively, you can expand f(z) into its Laurent series around z_0 and read off the coefficient of (z - z_0)^{-1} directly.
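The simple-pole formula is easy to approximate by evaluating (z - z_0)f(z) just off the pole. A sketch for f(z) = 1/(z^2 + 1) at z_0 = i, where factoring gives the exact residue 1/(2i) (the step size is an arbitrary small number):

```python
# Approximate the residue of f(z) = 1/(z^2 + 1) at the simple pole z0 = i
# via the limit formula Res(f, z0) = lim (z - z0) f(z).
f = lambda z: 1 / (z * z + 1)
z0 = 1j

eps = 1e-6
approx = eps * f(z0 + eps)   # (z - z0) f(z) evaluated just off the pole
exact = 1 / (2j)             # from 1/(z^2+1) = 1/((z - i)(z + i))
print(approx, exact)         # both ≈ -0.5j
```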

Application to real integrals

The residue theorem can evaluate difficult real integrals by extending them into the complex plane. The general strategy:

  1. Identify the real integral you want to evaluate
  2. Extend the integrand to a complex function f(z)
  3. Choose a contour in the complex plane that includes the real-axis portion corresponding to your integral
  4. Close the contour so that contributions from the added portions either vanish or are computable (often by letting a semicircular arc's radius go to infinity)
  5. Apply the residue theorem to evaluate the closed contour integral
  6. Extract the value of the original real integral

Common contour choices include semicircles in the upper or lower half-plane (for integrals over (-\infty, \infty)) and keyhole contours (for integrands with branch cuts along the positive real axis).
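The classic worked example is \int_{-\infty}^{\infty} dx/(1 + x^2): closing a semicircle in the upper half-plane encloses only the pole at z = i, with residue 1/(2i), so the integral equals 2\pi i \cdot 1/(2i) = \pi. A sketch comparing that residue value against brute-force quadrature (window size and step count are arbitrary):

```python
import math

# Residue prediction: 2*pi*i * Res(1/(1+z^2), i) = 2*pi*i * 1/(2i) = pi
residue_value = (2j * math.pi * (1 / 2j)).real

# Direct midpoint-rule quadrature of the real integral on a large window;
# the tails beyond |x| = L contribute about 2/L, so the match is approximate.
n, L = 200000, 1000.0
h = 2 * L / n
numeric = sum(h / (1 + (-L + (k + 0.5) * h) ** 2) for k in range(n))

print(residue_value)   # exactly pi
print(numeric)         # ≈ pi, short by the truncated tails (~0.002)
```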


Conformal mapping

A conformal mapping is an analytic function whose derivative is nonzero, which guarantees it preserves angles between curves. These mappings let you transform complicated regions into simpler ones while preserving the local geometric structure, which is useful for simplifying control system analysis.

Preservation of angles

The defining feature: if two curves cross at a point in the original domain, the angle between them is the same after the mapping. This follows from the Cauchy-Riemann equations, which ensure the Jacobian of the transformation acts locally as a rotation combined with a uniform scaling. The only requirement is that the derivative of the mapping is nonzero at the point in question.

Bilinear transformations

A bilinear (Möbius) transformation has the form:

w = \frac{az + b}{cz + d}

where a, b, c, d are complex constants with ad - bc \neq 0. These transformations map the extended complex plane (including the point at infinity) onto itself and always map circles and lines to circles and lines.

In control theory, bilinear transformations are used to map between different analysis domains. For instance, the Cayley transform maps the left half-plane (where stable continuous-time poles live) to the interior of the unit disk (where stable discrete-time poles live). This connection is central to converting between continuous-time and discrete-time system representations.
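A small sketch of this half-plane-to-disk behavior, using one common convention for a Cayley-type transform, w = (1 + s)/(1 - s) (the convention and the sample pole locations are illustrative assumptions, not the only normalization in use):

```python
# Bilinear (Cayley-type) transform w = (1 + s)/(1 - s):
# it sends Re(s) < 0 into the open unit disk |w| < 1.
def cayley(s):
    return (1 + s) / (1 - s)

stable_pole = -2 + 3j       # Re(s) < 0: should land inside the unit circle
unstable_pole = 0.5 + 1j    # Re(s) > 0: should land outside
boundary = 2j               # imaginary axis: should land on the unit circle

print(abs(cayley(stable_pole)))    # < 1
print(abs(cayley(unstable_pole)))  # > 1
print(abs(cayley(boundary)))       # = 1
```

The boundary case shows why this map underlies continuous-to-discrete conversion: the stability boundary (imaginary axis) goes exactly to the stability boundary (unit circle).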

Mapping of regions

Conformal maps can transform one region into another with simpler geometry. For example, the Joukowsky transformation:

w = \frac{1}{2}\left(z + \frac{1}{z}\right)

maps the exterior of the unit disk conformally onto the complex plane with the segment [-1, 1] removed, sending the unit circle itself onto that segment. By mapping a system's transfer function domain to a simpler region, stability analysis and controller design become more tractable.

Laplace transform

The Laplace transform converts a time-domain function f(t) into a function of the complex variable s:

F(s) = \mathcal{L}\{f(t)\} = \int_0^{\infty} f(t)\, e^{-st}\, dt

This is arguably the single most important tool in classical control theory. It turns differential equations into algebraic equations, making linear time-invariant (LTI) system analysis far more manageable.

Definition and properties

Key properties that make the Laplace transform so useful:

  • Linearity: \mathcal{L}\{af(t) + bg(t)\} = aF(s) + bG(s)
  • Time shifting: \mathcal{L}\{f(t-a)\,u(t-a)\} = e^{-as}F(s), where u(t) is the unit step function
  • Frequency shifting: \mathcal{L}\{e^{at}f(t)\} = F(s-a)
  • Differentiation: \mathcal{L}\{f'(t)\} = sF(s) - f(0)
  • Integration: \mathcal{L}\left\{\int_0^t f(\tau)\, d\tau\right\} = \frac{1}{s}F(s)

The differentiation property is especially important: it replaces derivatives with multiplication by s, which is exactly how differential equations become algebraic equations in the s-domain.
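The defining integral can be checked numerically for a concrete pair. A sketch for f(t) = e^{-2t}, whose transform is F(s) = 1/(s + 2) (the helper name, truncation point, and step count are arbitrary choices):

```python
import math

def laplace_numeric(f, s, T=40.0, n=200000):
    # Midpoint-rule approximation of ∫_0^T f(t) e^{-st} dt;
    # T is chosen large enough that the truncated tail is negligible.
    h = T / n
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) * h
               for k in range(n))

s = 1.5
numeric = laplace_numeric(lambda t: math.exp(-2 * t), s)
exact = 1 / (s + 2)          # table transform of e^{-2t}
print(numeric, exact)        # both ≈ 0.2857
```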

Inverse Laplace transform

The inverse Laplace transform recovers f(t) from F(s):

f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i\infty} F(s)\, e^{st}\, ds

where \gamma is a real constant chosen so that the integration path lies to the right of all singularities of F(s). This is called the Bromwich integral.

In practice, you'll rarely evaluate this integral directly. Instead, you'll use:

  • Partial fraction expansion: decompose F(s) into simpler terms whose inverse transforms you can look up in a table
  • Residue theorem: compute the Bromwich integral by closing the contour and summing residues
  • Convolution theorem: \mathcal{L}^{-1}\{F(s)G(s)\} = f(t) * g(t)

Application to differential equations

Here's the standard process for solving a linear ODE with the Laplace transform:

  1. Take the Laplace transform of both sides of the differential equation, using the differentiation property to handle derivatives (initial conditions get folded in automatically)
  2. Solve the resulting algebraic equation for F(s)
  3. Apply the inverse Laplace transform to get f(t)

For example, to solve y'' + 3y' + 2y = 0 with y(0) = 1, y'(0) = 0: the Laplace transform gives (s^2 + 3s + 2)Y(s) = s + 3, so Y(s) = \frac{s+3}{(s+1)(s+2)}. Partial fractions and inverse transforms then yield the time-domain solution. This technique is the backbone of transfer function analysis in control theory.
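Carrying out the partial-fraction step gives Y(s) = \frac{2}{s+1} - \frac{1}{s+2}, so y(t) = 2e^{-t} - e^{-2t}. A sketch verifying this candidate against the ODE and both initial conditions:

```python
import math

# Candidate solution from the inverse Laplace transform: y(t) = 2e^{-t} - e^{-2t}
def y(t):
    return 2 * math.exp(-t) - math.exp(-2 * t)

def dy(t):
    return -2 * math.exp(-t) + 2 * math.exp(-2 * t)

def d2y(t):
    return 2 * math.exp(-t) - 4 * math.exp(-2 * t)

print(y(0.0), dy(0.0))                    # initial conditions: 1.0, 0.0
for t in (0.1, 1.0, 5.0):
    print(d2y(t) + 3 * dy(t) + 2 * y(t))  # ODE residual ≈ 0 at every t
```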

Fourier transform

The Fourier transform decomposes a time-domain signal into its frequency components:

F(\omega) = \mathcal{F}\{f(t)\} = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt

where \omega is the angular frequency. While the Laplace transform uses a complex variable s = \sigma + j\omega and integrates from 0 to \infty, the Fourier transform evaluates along the imaginary axis (s = j\omega) and integrates over all time.

Definition and properties

The Fourier transform shares many properties with the Laplace transform:

  • Linearity: \mathcal{F}\{af(t) + bg(t)\} = aF(\omega) + bG(\omega)
  • Time shifting: \mathcal{F}\{f(t - t_0)\} = e^{-i\omega t_0}F(\omega)
  • Frequency shifting: \mathcal{F}\{e^{i\omega_0 t}f(t)\} = F(\omega - \omega_0)
  • Differentiation: \mathcal{F}\{f'(t)\} = i\omega F(\omega)
  • Convolution: \mathcal{F}\{f * g\} = F(\omega) \cdot G(\omega)

The Fourier transform is particularly suited for analyzing the steady-state frequency response of stable systems. For an LTI system with transfer function G(s), evaluating G(j\omega) gives the frequency response directly, telling you how the system amplifies or attenuates each frequency and how much phase shift it introduces. This is the foundation of Bode plots and frequency-domain design methods.
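Evaluating G(j\omega) is a one-liner once G is written as a function of s. A sketch for an illustrative first-order lag G(s) = 1/(s + 1), printing the gain in dB and phase in degrees at a few frequencies:

```python
import cmath
import math

# Frequency response of an illustrative first-order lag, G(s) = 1/(s + 1):
# substitute s = j*omega to get gain and phase at each frequency.
def G(s):
    return 1 / (s + 1)

for omega in (0.1, 1.0, 10.0):
    g = G(1j * omega)
    gain_db = 20 * math.log10(abs(g))        # Bode magnitude
    phase_deg = math.degrees(cmath.phase(g)) # Bode phase
    print(f"omega={omega}: {gain_db:.2f} dB, {phase_deg:.1f} deg")
```

At the corner frequency \omega = 1 this system attenuates by a factor of 1/\sqrt{2} (about -3 dB) with a -45° phase shift, the textbook first-order Bode behavior.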