
📈 Linear Algebra 101 Unit 2 Review


2.4 Explain the steps for diagonalizing a matrix

Written by the Fiveable Content Team • Last updated August 2025

Diagonalizing matrices is a key technique in linear algebra. It involves finding a diagonal matrix similar to the original matrix, using eigenvalues and eigenvectors. This process simplifies complex matrix operations and reveals important properties of linear transformations.

Understanding diagonalization connects to the broader concept of linear transformations. It shows how matrices can be decomposed and represented in simpler forms, making it easier to analyze their behavior and solve related problems in various applications.

Diagonalizing matrices

Steps for diagonalization

  • Diagonalizing a matrix involves finding a diagonal matrix $D$ that is similar to the original square matrix $A$
    • This means finding an invertible matrix $P$ such that $P^{-1}AP = D$
  • The columns of matrix $P$ are the eigenvectors of $A$ and the diagonal entries of $D$ are the corresponding eigenvalues
  • For an $n \times n$ matrix to be diagonalizable, it must have $n$ linearly independent eigenvectors
    • Matrices that do not satisfy this condition are not diagonalizable
  • The process of diagonalization involves three main steps:
    1. Find the eigenvalues of the matrix $A$ by solving the characteristic equation $\det(A - \lambda I) = 0$
    2. For each distinct eigenvalue $\lambda$, find its corresponding eigenvectors by solving $(A - \lambda I)x = 0$
    3. Form the matrices $P$ and $D$ using the eigenvectors and eigenvalues, then verify that $P^{-1}AP = D$
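The three steps above can be sketched numerically with NumPy. The matrix below is a hypothetical example; note that `np.linalg.eig` performs steps 1 and 2 together, returning the eigenvalues and an eigenvector matrix whose columns are the eigenvectors.

```python
import numpy as np

# Hypothetical example matrix; any diagonalizable square matrix works here.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: eigenvalues and eigenvectors (eigenvectors are the columns of P).
eigenvalues, P = np.linalg.eig(A)

# Step 3: place the eigenvalues on the diagonal of D and verify P^{-1} A P = D.
D = np.diag(eigenvalues)
assert np.allclose(np.linalg.inv(P) @ A @ P, D)
```

For this particular matrix the characteristic polynomial is $\lambda^2 - 7\lambda + 10$, so the eigenvalues are 5 and 2 (in whatever order `eig` returns them).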

Conditions for diagonalizability

  • A matrix is diagonalizable if and only if it has a full set of linearly independent eigenvectors
    • For an $n \times n$ matrix, this means having $n$ linearly independent eigenvectors
  • Equivalently, a matrix is diagonalizable if and only if the algebraic multiplicity equals the geometric multiplicity for each eigenvalue
    • Algebraic multiplicity: the number of times an eigenvalue appears as a root of the characteristic polynomial
    • Geometric multiplicity: the dimension of the eigenspace (the space of all eigenvectors) for an eigenvalue
  • Examples of matrices that are not diagonalizable:
    • Matrices with repeated eigenvalues but not enough linearly independent eigenvectors (defective matrices)
    • Matrices with non-real eigenvalues (e.g., rotation matrices), which are not diagonalizable over the real numbers (though they may still be diagonalizable over the complex numbers)
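A quick illustration of a defective matrix, using the classic shear matrix (a standard textbook example, not taken from the text above): its only eigenvalue is 1 with algebraic multiplicity 2, but there is just one independent eigenvector direction, so it cannot be diagonalized.

```python
import numpy as np

# Classic defective (non-diagonalizable) matrix: a shear.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, P = np.linalg.eig(A)   # eigenvalue 1, repeated

# Both columns of P point along the same direction, so P is singular:
# there is only one linearly independent eigenvector, not two.
assert np.allclose(eigenvalues, [1.0, 1.0])
assert np.linalg.matrix_rank(P) == 1
```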

Eigenvalues and eigenvectors


Finding eigenvalues

  • Eigenvalues $\lambda$ are scalar values that satisfy the equation $Ax = \lambda x$ for some nonzero vector $x$
    • The vector $x$ is called an eigenvector corresponding to $\lambda$
  • To find the eigenvalues, set up and solve the characteristic equation $\det(A - \lambda I) = 0$
    • Expanding the determinant gives a polynomial in $\lambda$, the characteristic polynomial
    • The roots of this polynomial are the eigenvalues of $A$
  • The algebraic multiplicity of an eigenvalue is the number of times it appears as a root of the characteristic polynomial
    • For example, if the characteristic polynomial is $(\lambda - 2)^2(\lambda - 3)$, then the eigenvalue 2 has algebraic multiplicity 2 and the eigenvalue 3 has algebraic multiplicity 1
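The characteristic polynomial and its roots can be computed with NumPy's `np.poly` and `np.roots`. The diagonal matrix below is a hypothetical example chosen so that the characteristic polynomial is exactly $(\lambda - 2)^2(\lambda - 3)$:

```python
import numpy as np

# Hypothetical matrix whose characteristic polynomial is (λ-2)^2 (λ-3).
A = np.diag([2.0, 2.0, 3.0])

# np.poly applied to a matrix returns the coefficients of det(λI - A);
# expanding (λ-2)^2 (λ-3) gives λ^3 - 7λ^2 + 16λ - 12.
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -7.0, 16.0, -12.0])

# The roots are the eigenvalues, repeated per algebraic multiplicity.
eigenvalues = np.roots(coeffs)
assert np.allclose(sorted(eigenvalues.real), [2.0, 2.0, 3.0])
```

In practice `np.linalg.eigvals(A)` is the more robust route; forming the characteristic polynomial explicitly is shown here only to mirror the hand computation.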

Finding eigenvectors

  • For each distinct eigenvalue $\lambda$, find the corresponding eigenvectors by solving $(A - \lambda I)x = 0$
    • This is equivalent to finding the nullspace of $(A - \lambda I)$
    • Non-trivial solutions to this system are the eigenvectors
  • The geometric multiplicity of an eigenvalue is the dimension of its eigenspace (the space of all its eigenvectors)
    • For example, if the eigenspace for $\lambda = 2$ is spanned by two linearly independent vectors, then the geometric multiplicity of 2 is 2
  • If an eigenvalue's eigenspace has dimension greater than 1, the corresponding eigenvectors can be chosen as any linearly independent set that spans that eigenspace
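One way to carry this out numerically is to compute the nullspace of $(A - \lambda I)$ from its SVD. The matrix below is a hypothetical example in which the eigenvalue 2 has geometric multiplicity 2:

```python
import numpy as np

# Hypothetical example: eigenvalue 2 has a two-dimensional eigenspace.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

lam = 2.0
M = A - lam * np.eye(3)

# Rows of Vt whose singular values are (numerically) zero span the
# nullspace of M, i.e. the eigenspace for lam.
_, s, Vt = np.linalg.svd(M)
eigenvectors = Vt[s < 1e-10]

geometric_multiplicity = eigenvectors.shape[0]
assert geometric_multiplicity == 2
for v in eigenvectors:
    assert np.allclose(A @ v, lam * v)   # each row really is an eigenvector
```

The `1e-10` cutoff is an assumed tolerance for "numerically zero"; SciPy users could reach for `scipy.linalg.null_space` instead.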

Eigenvalue and eigenvector matrices


Eigenvalue matrix

  • The eigenvalue matrix $D$ is a diagonal matrix constructed using the eigenvalues of $A$
    • The eigenvalues are placed on the main diagonal of $D$ in any order
    • If an eigenvalue has algebraic multiplicity greater than 1, it will appear on the diagonal multiple times
    • All off-diagonal entries of $D$ are zero
  • For example, if the eigenvalues of $A$ are 2 (with multiplicity 2) and 3, then a possible eigenvalue matrix is: $D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$

Eigenvector matrix

  • The eigenvector matrix $P$ is formed by using the eigenvectors of $A$ as its columns
    • The eigenvectors should be in the same order as their corresponding eigenvalues in $D$
    • Each column of $P$ is an eigenvector $x_i$ that satisfies $Ax_i = \lambda_i x_i$, where $\lambda_i$ is the $i$th eigenvalue on the diagonal of $D$
  • If an eigenvalue has algebraic multiplicity greater than 1, the corresponding eigenvectors can be chosen to be any linearly independent set that spans the eigenspace
  • $P$ is invertible if and only if $A$ is diagonalizable
    • If $P$ is invertible, then $P^{-1}AP = D$
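This invertibility criterion suggests a simple numerical diagonalizability test (a heuristic sketch with an assumed tolerance, not a library routine): check whether the eigenvector matrix $P$ has full rank.

```python
import numpy as np

def is_diagonalizable(A, tol=1e-10):
    """Heuristic check: A is diagonalizable iff its eigenvector
    matrix P is invertible, i.e. P has full rank."""
    _, P = np.linalg.eig(A)
    return np.linalg.matrix_rank(P, tol=tol) == A.shape[0]

assert is_diagonalizable(np.array([[4.0, 1.0], [2.0, 3.0]]))      # distinct eigenvalues
assert not is_diagonalizable(np.array([[1.0, 1.0], [0.0, 1.0]]))  # defective shear
```

A symmetric (or more generally normal) matrix always passes this test, since it always admits an orthonormal eigenvector basis.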

Matrix product representation

Diagonalization as matrix factorization

  • If $A$ is diagonalizable, it can be factored as $A = PDP^{-1}$, where $D$ is the eigenvalue matrix and $P$ is the eigenvector matrix
    • This factorization is not unique, as the eigenvalues and eigenvectors can be ordered differently in $D$ and $P$
    • If an eigenvalue has algebraic multiplicity greater than 1, then there are infinitely many choices for $P$, but the factorization still holds
  • The columns of $P$ form a basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$
    • The factorization $A = PDP^{-1}$ can be viewed as a change of basis to one in which the transformation $A$ is represented by a diagonal matrix

Applications of diagonalization

  • Diagonalization can simplify calculations involving powers of $A$
    • $A^k = (PDP^{-1})^k = PD^kP^{-1}$, and powers of a diagonal matrix are easy to compute
    • For example, if $D = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$, then $D^5 = \begin{bmatrix} 2^5 & 0 \\ 0 & 3^5 \end{bmatrix} = \begin{bmatrix} 32 & 0 \\ 0 & 243 \end{bmatrix}$
  • Diagonalization can be used to solve systems of linear differential equations
    • If $\mathbf{x}'(t) = A\mathbf{x}(t)$ and $A$ is diagonalizable, then the solution is $\mathbf{x}(t) = Pe^{Dt}P^{-1}\mathbf{x}(0)$, where $e^{Dt}$ is a diagonal matrix with $e^{\lambda_i t}$ on the diagonal
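Both applications can be sketched with NumPy (the matrix and initial condition below are hypothetical): $A^5$ is computed through $PD^5P^{-1}$, and the ODE solution $\mathbf{x}(t) = Pe^{Dt}P^{-1}\mathbf{x}(0)$ is checked against $\mathbf{x}'(t) = A\mathbf{x}(t)$ using a finite difference.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])       # hypothetical diagonalizable matrix
eigenvalues, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)

# Powers: only the diagonal entries get raised to the 5th power.
A5 = P @ np.diag(eigenvalues ** 5) @ P_inv
assert np.allclose(A5, np.linalg.matrix_power(A, 5))

# ODE solution x(t) = P exp(Dt) P^{-1} x(0), with exp(Dt) diagonal.
x0 = np.array([1.0, 0.0])        # hypothetical initial condition

def x(t):
    return P @ np.diag(np.exp(eigenvalues * t)) @ P_inv @ x0

assert np.allclose(x(0.0), x0)   # solution matches the initial condition

# Verify x'(t) ≈ A x(t) at t = 0.5 via a central difference.
h, t = 1e-6, 0.5
derivative = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(derivative, A @ x(t), rtol=1e-4)
```

The finite-difference check is just a sanity test; the point is that once $A$ is diagonalized, both $A^k$ and $e^{At}$ reduce to scalar operations on the eigenvalues.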