
🔦Electrical Circuits and Systems II Unit 12 Review


12.3 Solution of state equations


Written by the Fiveable Content Team • Last updated August 2025

State Transition Matrix and Matrix Exponential

State equations describe how a system's internal variables change over time. Solving them is how you predict a circuit's behavior from any set of initial conditions and inputs. The two main pieces are the state transition matrix (which captures the system's natural dynamics) and the convolution integral (which captures the effect of external inputs).

Fundamental Concepts of State Transition

The state transition matrix $\Phi(t)$ describes how the state vector evolves over time when there's no input. For a linear time-invariant (LTI) system, it equals the matrix exponential $e^{At}$, where $A$ is the system matrix.

You can compute $e^{At}$ using its power series expansion:

$$e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots$$

In practice, you rarely sum this series by hand. Instead, you'll use eigenvalue methods or Laplace transforms (covered below) to find a closed-form expression.
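That said, the power series is a handy sanity check. Here's a minimal Python sketch comparing a truncated series against SciPy's matrix exponential; the 2×2 matrix $A$ is an assumed illustrative example, not one from the text:

```python
import numpy as np
from scipy.linalg import expm

# Assumed example 2x2 system matrix; eigenvalues are -1 and -2
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 0.5

def expm_series(A, t, N=20):
    """Truncated power series: e^{At} ~ sum_{k=0}^{N} (At)^k / k!"""
    term = np.eye(A.shape[0])        # k = 0 term
    total = term.copy()
    for k in range(1, N + 1):
        term = term @ (A * t) / k    # builds (At)^k / k! incrementally
        total = total + term
    return total

print(np.allclose(expm_series(A, t), expm(A * t)))   # the two agree
```

For well-behaved matrices the series converges quickly, but it's numerically fragile for large $\|At\|$, which is why libraries use more robust algorithms internally.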

Key properties of $\Phi(t)$ that show up constantly:

  • $\Phi(0) = I$ (at $t = 0$, the state hasn't moved yet)
  • $\Phi(t_1 + t_2) = \Phi(t_1)\Phi(t_2)$ (semigroup property)
  • $\Phi(-t) = \Phi^{-1}(t)$ (time reversal inverts the matrix)
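All three properties are easy to verify numerically. A short sketch using SciPy's `expm` (the matrix $A$ is an assumed example):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example system matrix
Phi = lambda t: expm(A * t)                # Phi(t) = e^{At} for an LTI system

print(np.allclose(Phi(0.0), np.eye(2)))                  # Phi(0) = I
print(np.allclose(Phi(1.0), Phi(0.3) @ Phi(0.7)))        # semigroup property
print(np.allclose(Phi(-0.4), np.linalg.inv(Phi(0.4))))   # time reversal
```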

Eigenvalue and Eigenvector Analysis

The most common way to compute $e^{At}$ for small systems is through eigenvalue decomposition.

  1. Find the eigenvalues $\lambda$ by solving the characteristic equation $\det(\lambda I - A) = 0$.

  2. For each eigenvalue $\lambda_i$, find the corresponding eigenvector $v_i$ by solving $(\lambda_i I - A)v_i = 0$.

  3. Form the eigenvector matrix $V = [v_1 \; v_2 \; \cdots \; v_n]$ and the diagonal eigenvalue matrix $\Lambda = \text{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$.

  4. The state transition matrix is then $\Phi(t) = V e^{\Lambda t} V^{-1}$, where $e^{\Lambda t} = \text{diag}(e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t})$.

This works cleanly when all eigenvalues are distinct. If you have repeated eigenvalues, you need the Jordan canonical form, which introduces polynomial-times-exponential terms like $t e^{\lambda t}$ on the off-diagonal blocks.
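For distinct eigenvalues, the four steps above map directly onto a few NumPy calls. A sketch with an assumed 2×2 matrix whose eigenvalues are −1 and −2:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example; eigenvalues -1, -2
t = 1.0

lam, V = np.linalg.eig(A)                  # steps 1-3: eigenvalues + eigenvector matrix
Phi_t = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)   # step 4: V e^{Lambda t} V^{-1}

print(np.allclose(Phi_t, expm(A * t)))     # matches the matrix exponential
```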

The eigenvalues themselves tell you the system's natural modes. In a circuit context, they correspond to the natural frequencies you'd find from the characteristic equation of the differential equation.


Homogeneous and Particular Solutions

The complete solution to $\dot{x}(t) = Ax(t) + Bu(t)$ has two parts: the homogeneous (zero-input) response and the particular (zero-state) response.

Homogeneous Solution

When the input $u(t) = 0$, the state equation reduces to $\dot{x}(t) = Ax(t)$, and the solution is:

$$x_h(t) = e^{At}x(0)$$

This is the system's natural response to initial conditions alone. Stability depends entirely on the eigenvalues of $A$:

  • All eigenvalues have negative real parts → asymptotically stable (response decays to zero)
  • Some eigenvalues have zero real parts, none positive → marginally stable (response neither grows nor decays)
  • Any eigenvalue has a positive real part → unstable (response grows without bound)

For circuit systems, negative real parts correspond to energy dissipation through resistors. A purely imaginary eigenvalue pair means sustained oscillation (like an ideal LC circuit with no resistance).
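You can check stability from the eigenvalues and then watch the zero-input response decay. A sketch with an assumed stable system matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example; eigenvalues -1, -2
x0 = np.array([1.0, 0.0])                  # initial state

print(np.all(np.linalg.eigvals(A).real < 0))   # all negative real parts: stable

xh = lambda t: expm(A * t) @ x0            # zero-input response x_h(t) = e^{At} x(0)
print(np.linalg.norm(xh(10.0)) < 1e-3)     # response has essentially decayed
```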


Particular Solution

When the input $u(t) \neq 0$, you need to account for how the input drives the system. The general particular solution uses the convolution integral:

$$x_p(t) = \int_0^t e^{A(t-\tau)} B u(\tau) \, d\tau$$

The full solution combining both parts is:

$$x(t) = \underbrace{e^{At}x(0)}_{\text{zero-input response}} + \underbrace{\int_0^t e^{A(t-\tau)} B u(\tau) \, d\tau}_{\text{zero-state response}}$$

For simple inputs (step, ramp, sinusoidal), you can sometimes use the method of undetermined coefficients to guess the form of the particular solution and solve for its coefficients. For arbitrary inputs, the convolution integral or Laplace transform approach is the way to go.
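For a unit-step input, the convolution integral can be checked against the closed form $A^{-1}(e^{At} - I)B$, which holds whenever $A$ is invertible. A sketch with assumed $A$ and $B$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example system
B = np.array([[0.0], [1.0]])
t = 2.0

# Zero-state response for u = 1: x_p(t) = integral_0^t e^{A(t-tau)} B dtau
xp_num, _ = quad_vec(lambda tau: expm(A * (t - tau)) @ B, 0.0, t)

# Closed form for a unit step when A is invertible: A^{-1}(e^{At} - I)B
xp_exact = np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ B

print(np.allclose(xp_num, xp_exact))
```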

Alternative Solution Methods

Laplace Transform Approach

The Laplace transform converts the state equation from a matrix differential equation into a matrix algebra problem, which is often easier to handle.

  1. Take the Laplace transform of $\dot{x}(t) = Ax(t) + Bu(t)$: $sX(s) - x(0) = AX(s) + BU(s)$

  2. Rearrange to isolate $X(s)$: $(sI - A)X(s) = x(0) + BU(s)$

  3. Solve for $X(s)$: $X(s) = (sI - A)^{-1}x(0) + (sI - A)^{-1}BU(s)$

  4. Take the inverse Laplace transform to get $x(t)$.

The matrix $(sI - A)^{-1}$ is called the resolvent matrix. Its inverse Laplace transform gives you the state transition matrix: $e^{At} = \mathcal{L}^{-1}\{(sI - A)^{-1}\}$. This is actually one of the most practical ways to compute $e^{At}$ for 2×2 and 3×3 systems.

This approach also connects state-space analysis to transfer function analysis. The transfer function matrix is $H(s) = C(sI - A)^{-1}B + D$, which bridges the gap between the state-space and frequency-domain representations you've used in earlier units.
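The resolvent route works nicely with a symbolic tool. A SymPy sketch for an assumed 2×2 system, recovering $e^{At}$ entry by entry:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])          # assumed example system matrix

resolvent = (s * sp.eye(2) - A).inv()      # (sI - A)^{-1}
Phi = resolvent.applyfunc(lambda F: sp.inverse_laplace_transform(F, s, t))

# Entry (0, 0) comes out as 2*exp(-t) - exp(-2*t), matching e^{At}
print(sp.simplify(Phi - (A * t).exp()))    # zero matrix
```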

Numerical Solution Techniques

When systems are nonlinear, time-varying, or just large enough that closed-form solutions become impractical, numerical methods take over.

Euler's method is the simplest approach. Given the current state, you step forward by a small time increment $h$:

$$x(t + h) \approx x(t) + h\,[Ax(t) + Bu(t)]$$

This is a first-order method, so errors accumulate quickly unless $h$ is very small.
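A minimal forward-Euler sketch for a unit-step input ($A$, $B$, and the step size are assumed examples), checked against the exact closed-form solution:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example system
B = np.array([0.0, 1.0])
x0 = np.array([1.0, 0.0])
h = 1e-3                                   # small step: Euler is only first order

x = x0.copy()
for _ in range(int(2.0 / h)):              # integrate out to t = 2
    x = x + h * (A @ x + B * 1.0)          # u(t) = 1 (unit step)

# Exact solution: x(t) = e^{At} x(0) + A^{-1}(e^{At} - I)B  (unit step, A invertible)
x_exact = expm(A * 2.0) @ x0 + np.linalg.inv(A) @ (expm(A * 2.0) - np.eye(2)) @ B
print(np.linalg.norm(x - x_exact))         # small, but limited by O(h) accuracy
```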

Runge-Kutta methods (especially RK4) provide much better accuracy by evaluating the derivative at multiple points within each step, then taking a weighted average. RK4 is fourth-order accurate, meaning the error per step scales as $h^5$.

In practice, you'll typically use tools like MATLAB's ode45, which implements an adaptive-step Runge-Kutta algorithm. It automatically adjusts the step size: smaller steps where the solution changes rapidly, larger steps where it's smooth. Simulink provides a block-diagram environment for simulating state-space models visually, which is especially useful for verifying your analytical solutions against numerical results.
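In Python, SciPy's `solve_ivp` with its default RK45 method plays the same role as ode45: adaptive step size with error control. A sketch on the same kind of assumed step-driven system:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example system
B = np.array([0.0, 1.0])
x0 = np.array([1.0, 0.0])

# Adaptive Runge-Kutta (RK45) with tight tolerances; u(t) = 1 (unit step)
sol = solve_ivp(lambda t, x: A @ x + B * 1.0, (0.0, 2.0), x0,
                rtol=1e-8, atol=1e-10)

# Compare the final state with the exact solution at t = 2
x_exact = expm(A * 2.0) @ x0 + np.linalg.inv(A) @ (expm(A * 2.0) - np.eye(2)) @ B
print(np.allclose(sol.y[:, -1], x_exact, atol=1e-6))
```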