
Physical Sciences Math Tools

Matrix Algebra Rules


Why This Matters

Matrix algebra isn't just abstract symbol manipulation; it's the language that physical scientists use to describe everything from quantum mechanical states to coordinate transformations to systems of coupled differential equations. When you're tested on these rules, you're really being tested on whether you understand how linear transformations work and why certain operations preserve (or reveal) physical properties of systems.

The rules in this guide break down into a few core ideas: how matrices combine, what special matrices do, and how we extract meaningful information from matrices. Don't just memorize that matrix multiplication isn't commutative; understand why order matters when you're chaining transformations. Know which properties (like the trace) stay constant under similarity transformations, because that's how physicists identify quantities that don't depend on your choice of basis. Master these conceptual threads, and the individual rules will stick.


Basic Operations: How Matrices Combine

These operations form the foundation of everything else. Addition and scalar multiplication make matrices a vector space; matrix multiplication encodes composition of linear transformations.

Matrix Addition and Subtraction

  • Same dimensions required: you can only add matrices that match in both rows and columns, just like adding vectors of the same length
  • Element-wise operation means $(A + B)_{ij} = A_{ij} + B_{ij}$; no mixing of different positions occurs
  • Commutative and associative: unlike multiplication, order doesn't matter, since $A + B = B + A$
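
These addition rules are easy to check numerically. A minimal NumPy sketch (NumPy is assumed here; it is not part of the original guide):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Element-wise: (A + B)_ij = A_ij + B_ij
S = A + B

# Commutativity: order doesn't matter for addition
same = np.array_equal(A + B, B + A)
```

Here `S` is `[[6, 8], [10, 12]]`, each entry the sum of the matching entries, and `same` is `True`.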

Scalar Multiplication

  • Every element scales uniformly: multiplying matrix $A$ by scalar $c$ gives $(cA)_{ij} = c \cdot A_{ij}$
  • Dimensions unchanged but the "magnitude" of the transformation changes; think of stretching or compressing
  • Distributes over addition: $c(A + B) = cA + cB$, which is essential for linearity arguments
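
Uniform scaling and distributivity can be verified in a couple of lines (NumPy assumed, not part of the original guide):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.5, 1.5], [2.5, 3.5]])
c = 3.0

# Every element scales: (cA)_ij = c * A_ij
scaled = c * A

# Distributivity over addition: c(A + B) == cA + cB
distributes = np.allclose(c * (A + B), c * A + c * B)
```

The shape of `scaled` matches `A`; only the entries grow by the factor `c`.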

Matrix Multiplication

  • Dimension compatibility rule: for $AB$ to exist, $A$ must be $m \times n$ and $B$ must be $n \times p$, giving an $m \times p$ result
  • Row-column dot product defines each element: $(AB)_{ij} = \sum_{k} A_{ik}B_{kj}$
  • Non-commutative: $AB \neq BA$ in general, because applying transformation $A$ then $B$ differs from $B$ then $A$
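
Non-commutativity is easiest to see with two concrete transformations. A sketch (NumPy assumed; the specific matrices are illustrative choices, not from the guide):

```python
import numpy as np

A = np.array([[0, 1], [1, 0]])    # swap the x and y axes
B = np.array([[1, 0], [0, -1]])   # reflect across the x-axis

AB = A @ B   # apply B first, then A
BA = B @ A   # apply A first, then B

noncommutative = not np.array_equal(AB, BA)
```

Here `AB` is `[[0, -1], [1, 0]]` while `BA` is `[[0, 1], [-1, 0]]`: chaining the same two transformations in opposite orders gives different results.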

Compare: Matrix addition vs. matrix multiplication. Both combine matrices, but addition is element-wise and commutative while multiplication encodes transformation composition and is non-commutative. If an FRQ asks why order matters in a sequence of rotations, multiplication's non-commutativity is your answer.


Structural Operations: Rearranging and Reshaping

These operations change how we view a matrix without changing its fundamental information content. Transposition swaps the role of rows and columns, which matters for inner products and adjoint operators.

Matrix Transposition

  • Rows become columns: the transpose $A^T$ satisfies $(A^T)_{ij} = A_{ji}$
  • Dimensions flip: an $m \times n$ matrix becomes $n \times m$ after transposition
  • Key identity for products: $(AB)^T = B^T A^T$; the order reverses, which frequently appears in derivations
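
Both the dimension flip and the order-reversal identity can be checked on random matrices (a NumPy sketch; the shapes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # 2x3
B = rng.standard_normal((3, 4))   # 3x4, so AB exists and is 2x4

# (AB)^T == B^T A^T: the order reverses
reverses = np.allclose((A @ B).T, B.T @ A.T)

# Transposing flips the shape from (2, 4) to (4, 2)
shape = (A @ B).T.shape
```

Note that `A.T @ B.T` wouldn't even be dimension-compatible here, which is one way to remember why the order must reverse.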

Trace of a Matrix

  • Sum of diagonal elements: $\text{Tr}(A) = \sum_{i} A_{ii}$ for any square matrix
  • Cyclic invariance: $\text{Tr}(ABC) = \text{Tr}(CAB) = \text{Tr}(BCA)$, a property physicists exploit constantly
  • Basis-independent because trace equals the sum of eigenvalues; it's a similarity invariant
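
Cyclic invariance and the trace-equals-eigenvalue-sum property both check out numerically (NumPy assumed; random matrices are an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# Cyclic invariance: Tr(ABC) == Tr(CAB) == Tr(BCA)
t1 = np.trace(A @ B @ C)
t2 = np.trace(C @ A @ B)
t3 = np.trace(B @ C @ A)
cyclic = np.isclose(t1, t2) and np.isclose(t1, t3)

# Trace equals the sum of eigenvalues (complex parts cancel in conjugate pairs)
eig_sum_matches = np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A)).real)
```

Careful: the trace is only invariant under cyclic permutations, so $\text{Tr}(ABC)$ need not equal $\text{Tr}(BAC)$.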

Compare: Transposition vs. trace. Transposition is a structural operation that changes the matrix (while leaving the diagonal fixed), whereas the trace extracts a single number from that diagonal that remains invariant under basis changes. Trace is your go-to when you need a coordinate-independent quantity.


Special Matrices: Identity and Inverse

These matrices play the role of "1" and "1/x" from regular algebra. The identity does nothing; the inverse undoes a transformation.

Identity Matrix Properties

  • Ones on diagonal, zeros elsewhere: denoted $I$ or $I_n$ for the $n \times n$ case
  • Multiplicative neutral element: $AI = IA = A$ for any compatible matrix $A$
  • Eigenvalues are all 1: every vector is an eigenvector, which is why $I$ represents "no transformation"
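
A quick NumPy check of both identity properties (the particular matrix $A$ is an arbitrary example):

```python
import numpy as np

I = np.eye(3)   # 3x3 identity: ones on the diagonal, zeros elsewhere
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 5.0]])

# Multiplicative neutral element: AI == IA == A
neutral = np.allclose(A @ I, A) and np.allclose(I @ A, A)

# Every eigenvalue of I is 1
eigs_all_one = np.allclose(np.linalg.eigvals(I), 1.0)
```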

Inverse Matrix Properties

  • Defining equation: $AA^{-1} = A^{-1}A = I$, meaning the inverse "undoes" the original transformation
  • Exists only for square matrices with non-zero determinant: singular matrices ($\det(A) = 0$) have no inverse
  • Product rule: $(AB)^{-1} = B^{-1}A^{-1}$; order reverses, just like with transposes
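
The defining equation and the order-reversal product rule can both be verified directly (NumPy assumed; the matrices are illustrative invertible examples):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det = 1, so invertible
B = np.array([[1.0, 2.0], [0.0, 1.0]])   # det = 1, so invertible

# Defining equation: A A^{-1} == I
Ainv = np.linalg.inv(A)
undoes = np.allclose(A @ Ainv, np.eye(2))

# (AB)^{-1} == B^{-1} A^{-1}: order reverses
reverses = np.allclose(np.linalg.inv(A @ B),
                       np.linalg.inv(B) @ np.linalg.inv(A))
```

The reversal makes sense as "undoing in reverse order": to undo "apply $B$, then $A$", you must first undo $A$, then undo $B$.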

Compare: Identity vs. inverse. The identity leaves everything unchanged (like multiplying by 1), while the inverse reverses a transformation (like dividing by a number). Remember: checking that $\det(A) \neq 0$ is the fast way to confirm invertibility.


Matrix Invariants: Extracting Key Information

These quantities tell you about the matrix's behavior without requiring you to apply it to specific vectors. Determinants measure volume scaling; eigenvalues reveal stretching along special directions.

Determinant Calculation

  • Invertibility test: $\det(A) \neq 0$ if and only if $A$ is invertible
  • $2 \times 2$ formula: for $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, we have $\det(A) = ad - bc$
  • Geometric meaning: the determinant measures how the transformation scales volumes (negative means orientation flip)
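
The $2 \times 2$ formula and the singularity test can be confirmed side by side (NumPy assumed; the matrices are illustrative):

```python
import numpy as np

A = np.array([[3.0, 1.0], [2.0, 4.0]])

# 2x2 formula: det(A) = ad - bc = 3*4 - 1*2 = 10
d = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
matches = np.isclose(np.linalg.det(A), d)

# Singular example: second row is twice the first, so det = 0 (no inverse)
S = np.array([[1.0, 2.0], [2.0, 4.0]])
singular = np.isclose(np.linalg.det(S), 0.0)
```

Geometrically, `A` scales areas by a factor of 10, while `S` collapses the plane onto a line, so it scales areas by 0 and cannot be undone.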

Eigenvalues and Eigenvectors

  • Defining equation: $A\mathbf{v} = \lambda\mathbf{v}$, where $\lambda$ is the eigenvalue and $\mathbf{v}$ is the eigenvector
  • Characteristic polynomial: eigenvalues are roots of $\det(A - \lambda I) = 0$
  • Physical interpretation: eigenvectors are directions that only get stretched (by factor $\lambda$), not rotated
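
The defining equation can be verified for every eigenpair at once. A sketch (NumPy assumed; the symmetric matrix is an illustrative choice, which lets us use `eigh` for real, sorted eigenvalues):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])

# Columns of vecs are eigenvectors; vals holds the matching eigenvalues
vals, vecs = np.linalg.eigh(A)

# Check A v = lambda v for each eigenpair
satisfies = all(np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i])
                for i in range(len(vals)))
```

For this matrix the eigenvalues are 1 and 3: vectors along $(1, -1)$ are stretched by 1 (unchanged) and vectors along $(1, 1)$ are stretched by 3, with no rotation in either direction.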

Compare: Determinant vs. eigenvalues. The determinant equals the product of all eigenvalues, while the trace equals their sum. Both are similarity invariants, but eigenvalues give you the full picture of how the transformation stretches space along its principal directions.


Advanced Structure: Diagonalization

Diagonalization is the payoff for understanding eigenvalues. When possible, it lets you express a matrix in its simplest form: pure stretching along eigenvector directions.

Matrix Diagonalization

  • Decomposition form: $A = PDP^{-1}$, where $D$ is diagonal (eigenvalues on the diagonal) and $P$ contains eigenvectors as columns
  • Requires enough eigenvectors: an $n \times n$ matrix $A$ is diagonalizable only if it has $n$ linearly independent eigenvectors
  • Powers become trivial: $A^k = PD^kP^{-1}$, and raising a diagonal matrix to a power just raises each diagonal entry to that power
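
Both the decomposition and the fast-power trick can be demonstrated on a small symmetric matrix (NumPy assumed; for a symmetric matrix $P$ is orthogonal, so $P^{-1} = P^T$):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])

vals, P = np.linalg.eigh(A)   # columns of P are eigenvectors
D = np.diag(vals)             # eigenvalues on the diagonal

# Decomposition: A == P D P^{-1} (P^{-1} = P^T here, since P is orthogonal)
reconstructs = np.allclose(A, P @ D @ P.T)

# Powers become trivial: A^5 == P D^5 P^{-1}, with D^5 computed entry-wise
A5 = P @ np.diag(vals ** 5) @ P.T
matches = np.allclose(A5, np.linalg.matrix_power(A, 5))
```

The payoff is cost: `vals ** 5` is one exponentiation per eigenvalue, versus four full matrix multiplications for `matrix_power(A, 5)` done naively.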

Compare: Diagonalization vs. finding eigenvalues. Computing eigenvalues is always possible (just solve the characteristic polynomial), but diagonalization requires the additional step of having a complete set of independent eigenvectors. Defective matrices have eigenvalues but can't be diagonalized.


Quick Reference Table

Concept                          Best Examples
Element-wise operations          Addition, subtraction, scalar multiplication
Composition of transformations   Matrix multiplication
Order reversal rules             $(AB)^T = B^T A^T$, $(AB)^{-1} = B^{-1}A^{-1}$
Similarity invariants            Trace, determinant, eigenvalues
Invertibility conditions         $\det(A) \neq 0$, square matrix required
Geometric interpretation         Determinant (volume), eigenvalues (stretching)
Simplifying matrix powers        Diagonalization: $A^k = PD^kP^{-1}$

Self-Check Questions

  1. Which two operations follow an "order reversal" rule when applied to products, and why does this pattern occur?

  2. You're given a $3 \times 3$ matrix and told its trace is 6 and its determinant is 8. What can you conclude about the sum and product of its eigenvalues?

  3. Compare and contrast the conditions required for matrix addition versus matrix multiplication. Why are the requirements different?

  4. A matrix has $\det(A) = 0$. What does this tell you about (a) its invertibility, (b) at least one of its eigenvalues, and (c) the geometric effect of the transformation?

  5. If an FRQ asks you to compute $A^{100}$ for a given matrix, what technique should you reach for first, and what could prevent that technique from working?