📈 Linear Algebra 101 Unit 1 Review

1.2 Explain inverse of a matrix

Written by the Fiveable Content Team • Last updated August 2025

Matrix inverses are a crucial concept in linear algebra. They allow us to undo matrix operations and solve complex systems of equations. Understanding inverses helps us grasp how matrices transform space and how to reverse those transformations.

Inverses have unique properties that make them powerful tools. They're used in various fields, from computer graphics to economics. Knowing how to calculate and use inverses is essential for solving real-world problems involving linear systems and transformations.

Matrix Inverses and Properties

Definition and Properties of Matrix Inverses

  • The inverse of a square matrix $A$, denoted $A^{-1}$, is another square matrix such that $A A^{-1} = A^{-1} A = I$, where $I$ is the identity matrix
  • If $A$ is an invertible matrix, then $A^{-1}$ is unique
  • The inverse of a matrix, if it exists, has the same dimensions as the original matrix
  • The product of a matrix and its inverse is commutative, meaning $A A^{-1} = A^{-1} A$
  • The inverse of the inverse of a matrix $A$ is the matrix $A$ itself, i.e., $(A^{-1})^{-1} = A$
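The defining properties above are easy to check numerically. Here is a minimal sketch using NumPy (the library choice is ours, not part of the original text) that verifies $A A^{-1} = A^{-1} A = I$ and $(A^{-1})^{-1} = A$ for a sample matrix:

```python
import numpy as np

# Sample invertible matrix (det = -2, so the inverse exists)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A_inv = np.linalg.inv(A)

I = np.eye(2)
# The product with the inverse gives the identity in either order
print(np.allclose(A @ A_inv, I))
print(np.allclose(A_inv @ A, I))
# Inverting twice recovers the original matrix
print(np.allclose(np.linalg.inv(A_inv), A))
```

`np.allclose` is used rather than exact equality because floating-point round-off makes the products only approximately equal to $I$.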

Properties of Matrix Products and Inverses

  • The inverse of a product of matrices is equal to the product of their inverses in reverse order, i.e., $(AB)^{-1} = B^{-1} A^{-1}$
    • For example, if $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$, then $(AB)^{-1} = B^{-1} A^{-1}$
  • A diagonal matrix is invertible exactly when all its diagonal entries are nonzero, and its inverse is obtained by reciprocating each diagonal element
    • For example, if $D = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$, then $D^{-1} = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{3} \end{bmatrix}$
  • The inverse of an orthogonal matrix is equal to its transpose, i.e., $Q^{-1} = Q^T$ for an orthogonal matrix $Q$
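A quick sketch checking all three properties with NumPy (the specific matrices, and the choice of a rotation as the orthogonal example, are ours):

```python
import numpy as np

# Reverse-order rule: (AB)^{-1} = B^{-1} A^{-1}
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))

# Diagonal matrix: the inverse reciprocates each diagonal entry
D = np.diag([2.0, 3.0])
print(np.allclose(np.linalg.inv(D), np.diag([1/2, 1/3])))

# Orthogonal matrix (a 2D rotation): the inverse equals the transpose
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(np.linalg.inv(Q), Q.T))
```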

Conditions for Matrix Invertibility

Nonsingularity and Invertibility

  • A square matrix $A$ has an inverse if and only if it is nonsingular (the terms nonsingular and invertible are synonymous)
  • A matrix is invertible if and only if its determinant is not equal to zero, i.e., $\det(A) \neq 0$
    • For example, if $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, then $\det(A) = (1)(4) - (2)(3) = -2 \neq 0$, so $A$ is invertible
  • A matrix is invertible if and only if its rank is equal to its dimension, i.e., $\text{rank}(A) = n$ for an $n \times n$ matrix
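Both criteria can be tested directly; a sketch on the running example (NumPy usage is our addition):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

det = np.linalg.det(A)            # -2 for this A, so nonzero
rank = np.linalg.matrix_rank(A)   # 2, equal to n, so full rank

# Both tests agree: A is invertible
invertible = (not np.isclose(det, 0)) and rank == A.shape[0]
print(det, rank, invertible)
```

In floating-point work, `np.isclose(det, 0)` is safer than `det == 0`, since round-off can produce a tiny nonzero determinant for a matrix that is singular in exact arithmetic.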

Linear Independence and Invertibility

  • A matrix is invertible if and only if it has linearly independent columns or rows
    • For example, if $A = \begin{bmatrix} 1 & 2 \\ 3 & 6 \end{bmatrix}$, then its columns are linearly dependent (the second column is a multiple of the first), so $A$ is not invertible
  • If a matrix $A$ is invertible, then the linear transformation represented by $A$ is bijective (one-to-one and onto)
  • A square matrix $A$ is invertible if and only if the equation $Ax = 0$ has only the trivial solution $x = 0$
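A sketch of the dependent-columns example above: the rank drops below 2, and a nonzero vector solves $Ax = 0$. (The particular null-space vector $x = (2, -1)$ is our choice; it works because $2 \cdot \text{col}_1 - \text{col}_2 = 0$.)

```python
import numpy as np

# The singular matrix from the example: second column = 2 * first column
A_singular = np.array([[1.0, 2.0],
                       [3.0, 6.0]])

# Rank is 1, less than n = 2, so the matrix is not invertible
print(np.linalg.matrix_rank(A_singular))

# A nontrivial solution of Ax = 0, confirming the last bullet fails
x = np.array([2.0, -1.0])
print(np.allclose(A_singular @ x, 0))
```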

Calculating Matrix Inverses

Gaussian Elimination Method

  • Gaussian elimination method: Augment the matrix $A$ with the identity matrix $I$ and perform row operations to transform $A$ into $I$. The resulting right-hand side will be the inverse of $A$
    • For example, to find the inverse of $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, augment $A$ with $I$ to get $\left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array}\right]$, then perform row operations to obtain $\left[\begin{array}{cc|cc} 1 & 0 & -2 & 1 \\ 0 & 1 & \frac{3}{2} & -\frac{1}{2} \end{array}\right]$, so $A^{-1} = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix}$
  • This method is generally applicable to any square invertible matrix
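The augment-and-reduce procedure above can be sketched from scratch in plain Python. This is an illustrative implementation, not a production one (NumPy's `np.linalg.inv` is the practical choice); partial pivoting is added for numerical safety:

```python
def invert(A):
    """Invert a square matrix via Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    # Build the augmented matrix [A | I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Eliminate this column from every other row
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    # The right half of the reduced matrix is A^{-1}
    return [row[n:] for row in M]

# Matches the worked example: approximately [[-2, 1], [3/2, -1/2]]
print(invert([[1.0, 2.0], [3.0, 4.0]]))
```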

Adjugate and Cofactor Methods

  • Adjugate method (closely related to Cramer's rule): $A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$, where $\text{adj}(A)$ is the adjugate matrix of $A$, obtained by transposing the cofactor matrix of $A$
  • Cofactor expansion method: Compute the cofactor matrix of $A$, transpose it, and divide each element by the determinant of $A$
    • For example, to find the inverse of $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, first compute the cofactor matrix $C = \begin{bmatrix} 4 & -3 \\ -2 & 1 \end{bmatrix}$, then $\text{adj}(A) = C^T = \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix}$, and finally $A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{-2} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix}$
  • These methods are more computationally expensive than Gaussian elimination but can be useful for theoretical purposes
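A sketch of the cofactor route for small matrices, written from scratch (helper names are ours). The recursive Laplace-expansion determinant makes this factorial-time, which illustrates why the method is reserved for theory rather than computation:

```python
def minor(M, i, j):
    """The submatrix of M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant via Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j))
               for j in range(len(M)))

def inverse_adjugate(M):
    """A^{-1} = adj(A) / det(A), with adj(A) the transposed cofactor matrix."""
    n, d = len(M), det(M)
    if d == 0:
        raise ValueError("matrix is singular")
    # Cofactor C[i][j] = (-1)^(i+j) * det(minor(i, j))
    C = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)]
         for i in range(n)]
    # Transpose C (giving the adjugate) and divide by the determinant
    return [[C[j][i] / d for j in range(n)] for i in range(n)]

print(inverse_adjugate([[1, 2], [3, 4]]))  # [[-2.0, 1.0], [1.5, -0.5]]
```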

Other Methods

  • For $2 \times 2$ matrices, the inverse can be calculated using the formula $A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$, where $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$
    • For example, if $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, then $A^{-1} = \frac{1}{-2} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix}$
  • Singular Value Decomposition (SVD) method: Decompose the matrix $A$ into the product of three matrices, $A = U \Sigma V^T$, and then compute $A^{-1} = V \Sigma^{-1} U^T$, where $\Sigma^{-1}$ is obtained by reciprocating the non-zero singular values
    • This method is numerically stable; for rectangular or singular matrices, replacing $\frac{1}{\sigma}$ with $0$ for zero singular values yields the Moore-Penrose pseudoinverse rather than a true inverse
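A sketch of the SVD route in NumPy (our example matrix; `np.linalg.svd` returns $U$, the singular values, and $V^T$):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# A = U Σ V^T, so A^{-1} = V Σ^{-1} U^T for an invertible square A
U, s, Vt = np.linalg.svd(A)
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T

# Agrees with the direct inverse
print(np.allclose(A_inv, np.linalg.inv(A)))
```

For a rectangular or singular matrix, the analogous construction with zero singular values mapped to zero is what `np.linalg.pinv` computes.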

Solving Linear Systems with Matrix Inversion

Solving Systems of Linear Equations

  • A system of linear equations $Ax = b$, where $A$ is an invertible matrix, can be solved by multiplying both sides by $A^{-1}$, resulting in $x = A^{-1} b$
    • For example, consider the system of equations $\begin{cases} x + 2y = 5 \\ 3x + 4y = 11 \end{cases}$. This can be written in matrix form as $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 5 \\ 11 \end{bmatrix}$. Multiplying both sides by $A^{-1} = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix}$, we get $\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix} \begin{bmatrix} 5 \\ 11 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$. So the solution is $x = 1$ and $y = 2$
  • The solution obtained using matrix inversion is unique because the inverse of an invertible matrix is unique
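The worked system above can be solved both by explicit inversion and by a direct solver; a sketch in NumPy:

```python
import numpy as np

# The system x + 2y = 5, 3x + 4y = 11 in matrix form
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 11.0])

x_inv = np.linalg.inv(A) @ b     # x = A^{-1} b, as in the derivation
x_solve = np.linalg.solve(A, b)  # solves Ax = b without forming A^{-1}

print(x_inv, x_solve)  # both approximately [1, 2]
```

`np.linalg.solve` is generally preferred in practice: it factors $A$ once and back-substitutes, avoiding the extra cost and round-off of forming the inverse explicitly.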

Least-Squares Solution and Computational Considerations

  • Matrix inversion can be used to find the least-squares solution to an overdetermined system of linear equations
    • For example, consider the system $Ax = b$ where $A$ is an $m \times n$ matrix with $m > n$. The least-squares solution minimizes the Euclidean norm of the residual vector $r = b - Ax$ and, provided $A$ has full column rank (so $A^T A$ is invertible), is given by $x = (A^T A)^{-1} A^T b$
  • In practice, solving systems of linear equations using matrix inversion can be computationally expensive for large matrices, and other methods like LU decomposition or iterative methods may be preferred
    • For example, both solving $Ax = b$ via LU decomposition (Gaussian elimination with partial pivoting) and computing $A^{-1}$ explicitly cost $O(n^3)$ operations for an $n \times n$ matrix, but explicit inversion carries a larger constant factor and tends to be less numerically stable, so factorization-based solvers are preferred in practice
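A sketch of the normal-equations formula $x = (A^T A)^{-1} A^T b$ on a small made-up overdetermined system (the data points are ours), compared against `np.linalg.lstsq`, the numerically preferred route:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns (fitting a line
# intercept + slope*t through three points; data is illustrative)
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal-equations solution, valid since A has full column rank
x_normal = np.linalg.inv(A.T @ A) @ A.T @ b

# Library least-squares solver (uses an orthogonal factorization internally)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))
```

Forming $A^T A$ squares the condition number of the problem, which is why solvers based on QR or SVD factorizations, as `lstsq` uses, are favored for ill-conditioned data.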