🧚🏽‍♀️Abstract Linear Algebra I Unit 7 Review

7.2 Diagonalization Process and Spectral Decomposition

Written by the Fiveable Content Team • Last updated August 2025

Diagonalization and spectral decomposition are powerful tools for understanding matrix transformations. They break down complex matrices into simpler components, revealing their fundamental structure and behavior.

These techniques are crucial for solving various problems in linear algebra and beyond. By expressing matrices in terms of eigenvalues and eigenvectors, we gain insights into their properties and can simplify calculations in many applications.

Diagonalizing Matrices with Eigenvectors

The Diagonalization Process

  • Diagonalization finds a diagonal matrix $D$ similar to a given square matrix $A$, such that $A = PDP^{-1}$, where $P$ is an invertible matrix
  • A matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors, where $n$ is the size of the matrix
    • Example: A 3x3 matrix is diagonalizable if it has 3 linearly independent eigenvectors
  • To diagonalize a matrix $A$, first find its eigenvalues by solving the characteristic equation $\det(A - \lambda I) = 0$, where $\lambda$ represents the eigenvalues and $I$ is the identity matrix
    • Example: For a 2x2 matrix $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, the characteristic equation is $\det(A - \lambda I) = (1 - \lambda)(4 - \lambda) - 6 = 0$
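
The characteristic-equation step can be checked numerically. A minimal sketch, assuming NumPy (the text itself shows no code), using the 2x2 example matrix from above:

```python
import numpy as np

# The 2x2 example from the text.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# The characteristic polynomial det(A - lambda*I) = (1 - l)(4 - l) - 6
# expands to l^2 - 5l - 2; its roots are the eigenvalues.
roots = np.sort(np.roots([1.0, -5.0, -2.0]))

# NumPy computes the same eigenvalues directly from A.
eigvals = np.sort(np.linalg.eigvals(A).real)

print(np.allclose(roots, eigvals))  # True
```

Note that the sum of the roots equals the trace (5) and their product equals the determinant (-2), a quick sanity check on the polynomial's coefficients.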

Finding Eigenvectors and Eigenspaces

  • For each distinct eigenvalue $\lambda_i$, find the corresponding eigenvectors by solving the equation $(A - \lambda_i I)v = 0$, where $v$ represents the eigenvectors
    • Example: If $\lambda_1 = 2$ is an eigenvalue of matrix $A$, solve $(A - 2I)v = 0$ to find the eigenvectors associated with $\lambda_1$
  • The eigenvectors corresponding to each distinct eigenvalue form a basis for the eigenspace associated with that eigenvalue
    • The eigenspace is the set of all vectors $v$ that satisfy $(A - \lambda_i I)v = 0$ for a given eigenvalue $\lambda_i$
    • The dimension of the eigenspace is equal to the geometric multiplicity of the corresponding eigenvalue
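
Solving $(A - \lambda_i I)v = 0$ amounts to computing a null space. A sketch assuming NumPy, with a hypothetical 2x2 matrix (eigenvalues 5 and 2) chosen for illustration; the null space is extracted from the SVD:

```python
import numpy as np

# Hypothetical example: A has eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

def eigenspace_basis(A, lam, tol=1e-10):
    """Orthonormal basis for the eigenspace of eigenvalue lam,
    i.e. the null space of (A - lam*I), computed via the SVD."""
    n = A.shape[0]
    _, s, vt = np.linalg.svd(A - lam * np.eye(n))
    # Right-singular vectors whose singular value is (numerically) zero
    # span the null space; their count is the geometric multiplicity.
    return vt[s < tol].T

V = eigenspace_basis(A, 5.0)
print(V.shape[1])                   # 1 (geometric multiplicity)
print(np.allclose(A @ V, 5.0 * V))  # True: each column satisfies Av = 5v
```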

Diagonal and Invertible Matrices for Diagonalization


Constructing the Diagonal Matrix

  • The diagonal matrix $D$ is constructed by placing the eigenvalues of $A$ along the main diagonal in any order, with each eigenvalue appearing as many times as its algebraic multiplicity
    • Example: If the eigenvalues of $A$ are $\lambda_1 = 2$ with multiplicity 2 and $\lambda_2 = 3$ with multiplicity 1, then $D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$
  • The size of the diagonal matrix $D$ is the same as the size of the original matrix $A$
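
This construction is a one-liner in code. A minimal sketch assuming NumPy, using the eigenvalues and multiplicities from the example above:

```python
import numpy as np

# Eigenvalue 2 with algebraic multiplicity 2 and eigenvalue 3 with
# multiplicity 1: list each eigenvalue as many times as its multiplicity,
# then place the list on the main diagonal.
eigenvalues = [2.0, 2.0, 3.0]
D = np.diag(eigenvalues)

print(D.shape)  # (3, 3) -- same size as the original 3x3 matrix
```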

Constructing the Invertible Matrix

  • The invertible matrix $P$, also known as the modal matrix, is constructed by arranging the linearly independent eigenvectors of $A$ as its columns, in the same order as their corresponding eigenvalues in $D$
    • Example: If the eigenvectors of $A$ are $v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, then $P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
  • The columns of $P$ must be linearly independent for $P$ to be invertible. If $A$ has repeated eigenvalues, ensure that the corresponding eigenvectors are linearly independent
  • The matrix $P^{-1}$ is the inverse of $P$, and it can be found using various methods such as Gaussian elimination or the adjugate matrix divided by the determinant of $P$
    • Example: If $P = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, then $P^{-1} = \frac{1}{\det(P)} \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{pmatrix}$
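
The whole pipeline, building $P$ and $D$ and checking $A = PDP^{-1}$, can be sketched as follows, assuming NumPy and a hypothetical 2x2 matrix chosen for illustration:

```python
import numpy as np

# Hypothetical 2x2 matrix with two distinct eigenvalues (5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors -- exactly the modal matrix P.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# P is invertible iff its columns (the eigenvectors) are independent.
assert abs(np.linalg.det(P)) > 1e-12

# Reconstruct A from the factorization A = P D P^{-1}.
A_reconstructed = P @ D @ np.linalg.inv(P)
print(np.allclose(A, A_reconstructed))  # True
```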

Spectral Decomposition of Matrices


The Spectral Decomposition Theorem

  • The spectral decomposition theorem states that if $A$ is an $n \times n$ symmetric matrix with eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$ (listed with multiplicity, not necessarily distinct) and corresponding orthonormal eigenvectors $v_1, v_2, \dots, v_n$, then $A$ can be expressed as $A = \lambda_1 v_1 v_1^T + \lambda_2 v_2 v_2^T + \dots + \lambda_n v_n v_n^T$
    • Example: If $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$ with eigenvalues $\lambda_1 = 2$ and $\lambda_2 = 3$, and orthonormal eigenvectors $v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, then $A = 2 \begin{pmatrix} 1 \\ 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix} + 3 \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix}$
  • Each term in the spectral decomposition, $\lambda_i v_i v_i^T$, is a rank-one matrix, as it is the outer product of a column vector ($v_i$) with its transpose ($v_i^T$)
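
The rank-one sum can be verified numerically. A sketch assuming NumPy, with a hypothetical symmetric 2x2 matrix; `np.linalg.eigh` is used because it returns orthonormal eigenvectors for symmetric input:

```python
import numpy as np

# Hypothetical symmetric matrix (eigenvalues 1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns real eigenvalues and orthonormal eigenvectors (as columns).
eigvals, V = np.linalg.eigh(A)

# Each outer product v_i v_i^T has rank one; the eigenvalue-weighted
# sum of these rank-one terms rebuilds A.
terms = [lam * np.outer(v, v) for lam, v in zip(eigvals, V.T)]
assert all(np.linalg.matrix_rank(t) == 1 for t in terms)

print(np.allclose(sum(terms), A))  # True
```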

Matrix Form of the Spectral Decomposition

  • The spectral decomposition can be written in matrix form as $A = PDP^T$, where $P$ is an orthogonal matrix whose columns are the orthonormal eigenvectors of $A$, and $D$ is a diagonal matrix with the eigenvalues of $A$ on its main diagonal
    • Example: If $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$, $P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, and $D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$, then $A = PDP^T = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
  • If $A$ is not symmetric, the spectral decomposition theorem does not apply directly. However, a similar decomposition can be obtained using the singular value decomposition (SVD)
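
The matrix form $A = PDP^T$ is easy to check in code. A sketch assuming NumPy and a hypothetical symmetric matrix; note that because $P$ is orthogonal, no matrix inversion is needed:

```python
import numpy as np

# Hypothetical symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eigh(A)  # columns of P are orthonormal eigenvectors
D = np.diag(eigvals)

# P is orthogonal: P^T P = I, so P^{-1} is simply P^T.
assert np.allclose(P.T @ P, np.eye(2))

print(np.allclose(A, P @ D @ P.T))  # True
```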

Geometric and Algebraic Interpretation of Spectral Decomposition

Geometric Interpretation

  • Geometrically, the spectral decomposition $A = PDP^T$ describes the action of $A$ in three steps: a change of basis into the eigenvector directions ($P^T$), a scaling along each of those directions by the corresponding eigenvalue ($D$), and a change back to the standard basis ($P$)
    • Example: If $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$, the standard basis vectors $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ are themselves eigenvectors, and $A$ simply scales them by the eigenvalues 2 and 3, respectively
  • The eigenvectors represent the principal directions or axes of the transformation, while the eigenvalues represent the scaling factors along these axes
    • Example: In a 2D transformation, the eigenvectors may represent the directions of stretching or compression, while the eigenvalues indicate the amount of stretching or compression along those directions
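
The stretching picture can be confirmed directly: along each eigenvector direction, applying $A$ is the same as scaling by the eigenvalue. A minimal sketch assuming NumPy, using the diagonal example matrix from the text:

```python
import numpy as np

# Diagonal example from the text: eigenvalues 2 and 3.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
eigvals, V = np.linalg.eigh(A)

# Along each principal direction v_i, A acts as pure scaling:
# A v_i = lambda_i v_i.
for lam, v in zip(eigvals, V.T):
    print(np.allclose(A @ v, lam * v))  # True for each eigenpair
```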

Algebraic Interpretation

  • Algebraically, the spectral decomposition expresses a matrix as a linear combination of rank-one matrices, each of which represents a specific contribution to the overall transformation
    • Example: In the spectral decomposition $A = \lambda_1 v_1 v_1^T + \lambda_2 v_2 v_2^T$, each term $\lambda_i v_i v_i^T$ represents a rank-one matrix contributing to the transformation described by $A$
  • The magnitude of each eigenvalue indicates the significance of its corresponding eigenvector in the transformation. Eigenvalues of larger magnitude have a more significant impact on the transformation than smaller ones
    • Example: If $\lambda_1 = 10$ and $\lambda_2 = 0.1$, the transformation described by $A$ is primarily determined by the eigenvector corresponding to $\lambda_1$, as it has a much larger scaling factor
  • The spectral decomposition provides insight into the underlying structure of a matrix and can be used to analyze properties such as matrix powers, exponentials, and functions of matrices
    • Example: The matrix exponential $e^A$ can be easily computed using the spectral decomposition as $e^A = P e^D P^T$, where $e^D$ is a diagonal matrix with the exponentials of the eigenvalues on its main diagonal
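
The matrix-exponential shortcut $e^A = P e^D P^T$ can be sketched as follows, assuming NumPy. The hypothetical matrix $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is chosen because its exponential has the known closed form $\begin{pmatrix} \cosh 1 & \sinh 1 \\ \sinh 1 & \cosh 1 \end{pmatrix}$, which gives an independent check:

```python
import numpy as np

# Hypothetical symmetric matrix with eigenvalues -1 and 1.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
eigvals, P = np.linalg.eigh(A)

# Only the eigenvalues get exponentiated; e^D stays diagonal.
expA = P @ np.diag(np.exp(eigvals)) @ P.T

# Closed form for this particular A: e^A = cosh(1)*I + sinh(1)*A.
exact = np.array([[np.cosh(1.0), np.sinh(1.0)],
                  [np.sinh(1.0), np.cosh(1.0)]])

print(np.allclose(expA, exact))  # True
```

The same pattern computes any function of a symmetric matrix (powers, square roots, logarithms): apply the scalar function to the eigenvalues in $D$ and conjugate back by $P$.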