🧚🏽‍♀️Abstract Linear Algebra I Unit 7 Review

7.3 Functions of Matrices and Powers of Matrices

Written by the Fiveable Content Team • Last updated August 2025

Functions of matrices and powers of matrices are powerful tools in linear algebra. They allow us to extend scalar operations to matrices, opening up new ways to solve complex problems. These concepts are crucial for understanding how matrices behave under various operations.

Diagonalization is key to efficiently computing matrix functions and powers. By breaking down a matrix into simpler components, we can perform calculations more easily and gain insights into the matrix's properties. This connects directly to the broader applications of diagonalization in linear algebra.

Matrix functions through diagonalization

Definition and properties of matrix functions

  • A matrix function $f(A)$ is defined for a diagonalizable matrix $A$ as $f(A) = Pf(D)P^{-1}$, where $A = PDP^{-1}$ is the diagonalization of $A$ and $f(D)$ is the diagonal matrix obtained by applying $f$ to each diagonal entry of $D$
  • The diagonalization approach allows matrix functions to be computed efficiently and simplifies their properties
  • Common matrix functions include the exponential $e^A$, the logarithm $\log(A)$, and the trigonometric functions $\sin(A)$ and $\cos(A)$
  • Properties of matrix functions include:
    • For the exponential, $e^Ae^B = e^{A+B}$ whenever $A$ and $B$ commute ($AB = BA$); this identity fails in general for non-commuting matrices
    • $f(A^T) = (f(A))^T$ (e.g., $(e^A)^T = e^{A^T}$)
    • $f(PAP^{-1}) = Pf(A)P^{-1}$ for invertible $P$ (e.g., $e^{PAP^{-1}} = Pe^AP^{-1}$)
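The definition $f(A) = Pf(D)P^{-1}$ translates directly into code. Below is a minimal NumPy sketch (the helper name `matrix_function` is ours, and it assumes $A$ is diagonalizable):

```python
import numpy as np

def matrix_function(A, f):
    """Compute f(A) = P f(D) P^{-1} for a diagonalizable matrix A."""
    eigvals, P = np.linalg.eig(A)        # A = P D P^{-1}
    fD = np.diag(f(eigvals))             # apply f to each eigenvalue
    return P @ fD @ np.linalg.inv(P)

# Sanity check: f(x) = x^2 should reproduce A @ A
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
via_diag = matrix_function(A, lambda lam: lam**2)
direct = A @ A
assert np.allclose(via_diag, direct)
```

The same helper computes $e^A$, $\log(A)$, etc. by passing `np.exp`, `np.log`, and so on for `f`.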

Applications of matrix functions

  • Matrix functions have applications in various fields, such as:
    • Solving systems of linear differential equations (e.g., $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$ has solution $\mathbf{x}(t) = e^{At}\mathbf{x}(0)$)
    • Markov processes and stochastic matrices (e.g., the transition matrix after $n$ steps is given by $P^n$)
    • Control theory and state-space models (e.g., the state transition matrix is $e^{At}$)
  • Examples of matrix functions in action:
    • The matrix exponential of a skew-symmetric matrix $A$ (i.e., $A^T = -A$) is an orthogonal matrix (i.e., $e^A(e^A)^T = I$)
    • The matrix logarithm can be used to interpolate between two positive definite matrices $A$ and $B$ along the geodesic path $A(t) = A\exp(t\log(A^{-1}B))$, which reduces to $A^{1-t}B^t$ when $A$ and $B$ commute
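Both examples are easy to verify numerically. The sketch below assumes SciPy is available for `expm` and `logm`; the matrices are arbitrary illustrations:

```python
import numpy as np
from scipy.linalg import expm, logm

# exp of a skew-symmetric matrix is orthogonal: e^A (e^A)^T = I
A = np.array([[0.0, -1.3],
              [1.3,  0.0]])            # A^T = -A
Q = expm(A)
assert np.allclose(Q @ Q.T, np.eye(2))

# Geodesic interpolation between positive definite matrices:
# A(t) = A exp(t log(A^{-1} B)) recovers A at t=0 and B at t=1
Apd = np.array([[2.0, 0.5], [0.5, 1.0]])
Bpd = np.array([[3.0, 0.2], [0.2, 2.0]])

def geodesic(t):
    return Apd @ expm(t * logm(np.linalg.solve(Apd, Bpd)))

assert np.allclose(geodesic(0.0), Apd)
assert np.allclose(geodesic(1.0), Bpd)
```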

Efficient power calculations for matrices

Diagonalization for computing matrix powers

  • For a diagonalizable matrix $A = PDP^{-1}$, the $k$-th power of $A$ can be computed as $A^k = PD^kP^{-1}$, where $D^k$ is obtained by raising each diagonal entry of $D$ to the $k$-th power
  • This method is more efficient than directly multiplying $A$ by itself $k$ times, especially for large matrices or high powers
  • Negative integer powers can be computed using the inverse: $A^{-k} = (A^{-1})^k = PD^{-k}P^{-1}$
  • Fractional powers can be computed by taking the appropriate roots of the diagonal entries: $A^{1/k} = PD^{1/k}P^{-1}$
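The formula $A^k = PD^kP^{-1}$ can be sketched as follows (the helper name is ours; the code assumes $A$ is diagonalizable, and invertible for negative powers):

```python
import numpy as np

def power_via_diagonalization(A, k):
    """Compute A^k as P D^k P^{-1}; k may be negative or fractional."""
    eigvals, P = np.linalg.eig(A)
    return P @ np.diag(eigvals**k) @ np.linalg.inv(P)

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])             # eigenvalues 5 and 2

# Agrees with repeated multiplication
assert np.allclose(power_via_diagonalization(A, 5),
                   np.linalg.matrix_power(A, 5))

# Negative powers via D^{-k}
assert np.allclose(power_via_diagonalization(A, -2),
                   np.linalg.matrix_power(np.linalg.inv(A), 2))
```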

Examples and applications of matrix powers

  • Computing the $n$-th Fibonacci number using the matrix power method:
    • Define $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$; then $A^n = \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix}$, where $F_n$ is the $n$-th Fibonacci number
    • Diagonalization of $A$ allows for efficient computation of $A^n$ and, consequently, $F_n$
  • Analyzing the long-term behavior of a Markov chain:
    • The steady-state distribution of a regular Markov chain with transition matrix $P$ is given by the normalized left eigenvector of $P$ corresponding to the eigenvalue 1
    • Powers of $P$ converge to a matrix whose identical rows each equal the steady-state distribution
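The Fibonacci example can be sketched with exact integer arithmetic. Here binary exponentiation of $A$ stands in for the diagonalization step (Binet's formula, the diagonalized form, suffers floating-point error for large $n$):

```python
def fib(n):
    """F_n read off from A^n, where A = [[1, 1], [1, 0]]."""
    def matmul(X, Y):
        return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
                [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]
    result = [[1, 0], [0, 1]]          # identity
    base = [[1, 1], [1, 0]]
    while n:                           # binary exponentiation: O(log n) products
        if n & 1:
            result = matmul(result, base)
        base = matmul(base, base)
        n >>= 1
    return result[0][1]                # A^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]]

assert [fib(n) for n in range(1, 8)] == [1, 1, 2, 3, 5, 8, 13]
```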

Diagonalization for matrix exponentials and logarithms

Computing matrix exponentials and logarithms

  • The matrix exponential is defined as $e^A = Pe^DP^{-1}$, where $e^D$ is the diagonal matrix obtained by applying the scalar exponential function to each diagonal entry of $D$
  • The matrix logarithm is the inverse function of the matrix exponential: if $e^A = B$, then $\log(B) = A$. It is defined only for invertible matrices with no negative real eigenvalues
  • Properties of matrix exponentials include:
    • $e^{A+B} = e^Ae^B$ for commuting matrices $A$ and $B$
    • $(e^A)^T = e^{A^T}$
    • $e^{PAP^{-1}} = Pe^AP^{-1}$ for invertible $P$
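All three properties can be checked numerically. This sketch assumes SciPy's `expm` is available; the matrices are arbitrary illustrations, and $B = 2A$ is used because it trivially commutes with $A$:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.5, 1.0],
              [0.0, -0.3]])

# (e^A)^T = e^{A^T}
assert np.allclose(expm(A).T, expm(A.T))

# e^{A+B} = e^A e^B requires AB = BA; B = 2A commutes with A
B = 2 * A
assert np.allclose(expm(A + B), expm(A) @ expm(B))

# Similarity: e^{P A P^{-1}} = P e^A P^{-1}
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Pinv = np.linalg.inv(P)
assert np.allclose(expm(P @ A @ Pinv), P @ expm(A) @ Pinv)
```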

Applications of matrix exponentials and logarithms

  • Matrix exponentials and logarithms have applications in solving systems of linear differential equations, Markov processes, and control theory
  • Examples of matrix exponentials and logarithms in action:
    • The solution to the first-order linear differential equation $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$ with initial condition $\mathbf{x}(0) = \mathbf{x}_0$ is given by $\mathbf{x}(t) = e^{At}\mathbf{x}_0$
    • The matrix logarithm can be used to compute the principal logarithm of a complex number $z = re^{i\theta}$ via the matrix identity $\log(z) = \log(r) + i\theta = \log\left(\begin{pmatrix} r & 0 \\ 0 & r \end{pmatrix}\right) + \log\left(\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\right)$
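The complex-number identity can be checked by representing $z = re^{i\theta}$ as the scaled rotation matrix $rR(\theta)$ and comparing its principal matrix logarithm against $\log(r)I + \theta J$, where $J$ plays the role of $i$. This sketch assumes SciPy's `logm` and $|\theta| < \pi$ so the principal branch applies:

```python
import numpy as np
from scipy.linalg import logm

r, theta = 2.0, 0.7
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # matrix analogue of i (J^2 = -I)
M = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # represents z = r e^{i theta}

# log(M) = log(r) I + theta J, mirroring log(z) = log(r) + i theta
assert np.allclose(logm(M), np.log(r) * np.eye(2) + theta * J)
```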

Solving matrix equations with functions

Techniques for solving matrix equations involving functions

  • Matrix equations involving functions of matrices, such as $e^AX - Xe^B = C$ or $A\log(X) + \log(X)B = C$, can often be solved using diagonalization
  • If $A$ and $B$ are diagonalizable with eigenvalue matrices $D_A$ and $D_B$, respectively, the equation $e^AX - Xe^B = C$ can be transformed into $e^{D_A}Y - Ye^{D_B} = P_A^{-1}CP_B$, where $X = P_AYP_B^{-1}$. This transformed equation can be solved for $Y$, and then $X$ can be obtained by reversing the transformation
  • Similarly, for the equation $A\log(X) + \log(X)B = C$, diagonalization can be used to transform it into $D_AZ + ZD_B = P_A^{-1}CP_B$, where $\log(X) = P_AZP_B^{-1}$. Solve for $Z$, then obtain $\log(X)$ and, subsequently, $X$
  • In some cases, additional techniques such as vectorization or the Kronecker product may be needed to solve the transformed equations
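Because $e^{D_A}$ and $e^{D_B}$ are diagonal, the transformed equation decouples entrywise: $(e^{a_i} - e^{b_j})\,y_{ij} = \tilde{c}_{ij}$. A minimal sketch (the helper name is ours; it assumes $A$, $B$ are diagonalizable and $e^{a_i} \neq e^{b_j}$ for all $i, j$, and uses SciPy only to verify the answer):

```python
import numpy as np
from scipy.linalg import expm

def solve_expAX_minus_XexpB(A, B, C):
    """Solve e^A X - X e^B = C via diagonalization of A and B."""
    a, PA = np.linalg.eig(A)
    b, PB = np.linalg.eig(B)
    Ct = np.linalg.inv(PA) @ C @ PB              # transformed right-hand side
    Y = Ct / (np.exp(a)[:, None] - np.exp(b)[None, :])   # entrywise solve
    return PA @ Y @ np.linalg.inv(PB)            # X = P_A Y P_B^{-1}

A = np.array([[1.0, 0.2], [0.0, 2.0]])
B = np.array([[-1.0, 0.0], [0.3, -2.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])
X = solve_expAX_minus_XexpB(A, B, C)
assert np.allclose(expm(A) @ X - X @ expm(B), C)
```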

Examples of solving matrix equations with functions

  • Solving the matrix equation $e^AX - Xe^B = C$:
    • Diagonalize $A = P_AD_AP_A^{-1}$ and $B = P_BD_BP_B^{-1}$
    • Transform the equation to $e^{D_A}Y - Ye^{D_B} = P_A^{-1}CP_B$, where $X = P_AYP_B^{-1}$
    • Solve the transformed equation for $Y$ by equating corresponding entries
    • Obtain the solution $X = P_AYP_B^{-1}$
  • Solving the matrix equation $A\log(X) + \log(X)B = C$:
    • Diagonalize $A = P_AD_AP_A^{-1}$ and $B = P_BD_BP_B^{-1}$
    • Transform the equation to $D_AZ + ZD_B = P_A^{-1}CP_B$, where $\log(X) = P_AZP_B^{-1}$
    • Solve the transformed equation for $Z$ by equating corresponding entries
    • Obtain $\log(X) = P_AZP_B^{-1}$ and then compute $X = e^{\log(X)}$
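The second recipe follows the same entrywise pattern: $D_AZ + ZD_B$ decouples as $(a_i + b_j)\,z_{ij} = \tilde{c}_{ij}$. A minimal sketch (the helper name is ours; it assumes $A$, $B$ are diagonalizable with $a_i + b_j \neq 0$ for all $i, j$, and that SciPy's `expm` is available for the final step):

```python
import numpy as np
from scipy.linalg import expm

def solve_AlogX_plus_logXB(A, B, C):
    """Solve A log(X) + log(X) B = C; return (X, log(X))."""
    a, PA = np.linalg.eig(A)
    b, PB = np.linalg.eig(B)
    Ct = np.linalg.inv(PA) @ C @ PB
    Z = Ct / (a[:, None] + b[None, :])           # entrywise solve of D_A Z + Z D_B = Ct
    logX = PA @ Z @ np.linalg.inv(PB)            # log(X) = P_A Z P_B^{-1}
    return expm(logX), logX                      # X = e^{log(X)}

A = np.array([[2.0, 0.0], [0.1, 3.0]])
B = np.array([[1.0, 0.2], [0.0, 4.0]])
C = np.array([[0.5, 1.0], [1.5, 2.0]])
X, logX = solve_AlogX_plus_logXB(A, B, C)
assert np.allclose(A @ logX + logX @ B, C)
```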