Key Concepts of Rank of a Matrix


Why This Matters

The rank of a matrix is one of the most powerful diagnostic tools in linear algebra—it tells you everything about what a matrix can and cannot do. When you're solving systems of equations, determining invertibility, or analyzing linear transformations, rank is your go-to concept. You're being tested on your ability to connect rank to linear independence, solution spaces, invertibility, and the structure of transformations, so understanding rank deeply will unlock multiple problem types on your exams.

Don't just memorize that "rank equals the number of linearly independent rows." Instead, know why rank matters: it reveals the dimension of the image, predicts the number of solutions to a system, and determines whether a matrix can be inverted. Every concept in this guide connects back to one core idea—rank measures the essential dimensionality of the information a matrix carries. Master that, and you've got this.


Foundational Definitions

These concepts establish what rank actually means and how it relates to the structure of a matrix. Rank quantifies the maximum number of "useful" directions a matrix can represent.

Definition of Matrix Rank

  • The rank is the maximum number of linearly independent row or column vectors—these are the vectors that genuinely contribute new information to the matrix
  • Rank equals the dimension of the column space (or row space), the vector space spanned by the matrix's columns or rows
  • Rank measures the "non-degenerateness" of a linear system—a higher rank means less redundancy in the equations

Relationship Between Rank and Linear Independence

  • Linear independence means no vector is a combination of the others—if you can write one row as a sum of other rows, that row doesn't contribute to rank
  • Rank counts exactly how many rows (or columns) are independent—redundant vectors get "collapsed" in the rank calculation
  • If rank equals the number of rows, all rows are linearly independent—no row is wasted or duplicated

Compare: Definition of Rank vs. Linear Independence—both describe the same phenomenon from different angles. Rank gives you the number, while linear independence describes the property of the vectors. FRQs often ask you to explain why adding a dependent row doesn't change rank.
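
To see this numerically, here is a minimal sketch using NumPy (the library choice and the example matrix are illustrative assumptions, not part of the guide): appending a row that is a combination of existing rows leaves the rank unchanged.

```python
# Minimal sketch: a dependent row does not change the rank.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # two independent rows, so rank 2

dependent_row = A[0] + A[1]           # a linear combination of the existing rows
B = np.vstack([A, dependent_row])     # 3x3 matrix with one redundant row

print(np.linalg.matrix_rank(A))       # 2
print(np.linalg.matrix_rank(B))       # still 2: the new row adds no information
```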


Computing Rank

These methods give you practical tools for finding rank. Row reduction reveals rank by eliminating redundancy systematically.

Methods to Calculate Rank (Row Echelon Form, Determinants)

  • Row echelon form (REF) reveals rank by counting non-zero rows—each non-zero row after reduction represents one independent direction
  • Reduced row echelon form (RREF) makes pivot positions explicit: the number of pivots equals the rank
  • For square matrices, a non-zero determinant means full rank—determinants provide a quick invertibility check without full row reduction

Rank and Matrix Dimensions

  • Rank cannot exceed the smaller of the row or column count—written as $\text{rank}(A) \leq \min(m, n)$ for an $m \times n$ matrix
  • A "wide" matrix (more columns than rows) has max rank equal to row count—there simply aren't enough rows to span more dimensions
  • Rank tells you the dimension of the image: how many dimensions the transformation can actually "reach"

Compare: REF vs. Determinants for finding rank—REF works for any matrix shape, while determinants only apply to square matrices. Use determinants for quick checks on square matrices; use row reduction when you need to handle rectangular matrices or find the actual null space.
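
As a rough illustration of both approaches (SymPy, NumPy, and the example matrix are assumptions, not tools named in the guide), you can count pivots in the RREF or test the determinant of a square matrix:

```python
# Sketch: rank via RREF pivots vs. a determinant check (square matrix only).
import numpy as np
from sympy import Matrix

M = Matrix([[2, 4, 1],
            [0, 3, 5],
            [2, 7, 6]])               # third row = row 1 + row 2, so expect rank 2

rref_form, pivot_columns = M.rref()   # RREF plus the indices of the pivot columns
print(len(pivot_columns))             # 2 pivots -> rank 2
print(M.rank())                       # SymPy's direct rank computation agrees

det = np.linalg.det(np.array(M.tolist(), dtype=float))
print(abs(det) < 1e-9)                # True: zero determinant, so not full rank
```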


The Rank-Nullity Connection

This theorem is the bridge between what a matrix "does" (its image) and what it "kills" (its kernel). The rank-nullity theorem partitions the domain into productive and nullified dimensions.

Rank-Nullity Theorem

  • The theorem states $\text{rank}(A) + \text{nullity}(A) = n$ where $n$ is the number of columns—this is non-negotiable and always holds
  • Nullity counts free variables in the solution to $Ax = 0$: the dimension of the kernel or null space
  • This links image dimension to kernel dimension—if rank goes up, nullity must go down, and vice versa (see the sketch below)
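
Here is a small check of the identity, sketched with SymPy (the library and the $3 \times 4$ matrix are illustrative assumptions):

```python
# Sketch: verify rank(A) + nullity(A) = n for a 3x4 example.
from sympy import Matrix

A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4],
            [1, 2, 1, 7]])            # third row = row 1 + row 2

n = A.cols                            # number of columns = dimension of the domain
rank = A.rank()                       # dimension of the image
nullity = len(A.nullspace())          # number of basis vectors of the kernel

print(rank, nullity, rank + nullity == n)   # 2 2 True
```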

Applications in Solving Systems of Equations

  • A system is consistent if $\text{rank}(A) = \text{rank}([A|b])$—the augmented matrix shouldn't have "extra" independent information (see the sketch after this list)
  • For a consistent system, full column rank means a unique solution and lower rank means infinitely many—the gap between rank and column count tells you how many free variables exist; if the ranks of $A$ and $[A|b]$ differ, there are no solutions at all
  • Rank identifies dependencies among equations—redundant equations don't add constraints, they just clutter the system
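
The consistency test is easy to automate; a hedged sketch with NumPy (the helper name `is_consistent` and the toy system are assumptions) compares the two ranks directly:

```python
# Sketch: Ax = b is solvable iff augmenting with b does not raise the rank.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                # rank 1: the second equation repeats the first

b_consistent   = np.array([[3.0], [6.0]])   # lies in the column space of A
b_inconsistent = np.array([[3.0], [7.0]])   # does not

def is_consistent(A, b):
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, b]))

print(is_consistent(A, b_consistent))    # True
print(is_consistent(A, b_inconsistent))  # False
```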

Compare: Rank vs. Nullity—they're two sides of the same coin. Rank measures what the transformation preserves; nullity measures what it destroys. If an exam asks about free variables, think nullity. If it asks about the dimension of the output space, think rank.


Rank and Matrix Properties

These concepts connect rank to important structural properties like invertibility and transformation behavior.

Full Rank Matrices

  • Full rank means $\text{rank}(A) = \min(m, n)$—every row and column contributes maximally to the span
  • No redundant rows or columns exist—the matrix carries the maximum possible information for its size
  • Full rank square matrices are invertible—this is the key test for whether $A^{-1}$ exists

Rank and Invertibility of Matrices

  • A square matrix is invertible if and only if it has full rank: $\text{rank}(A) = n$ for an $n \times n$ matrix
  • Rank less than $n$ means a non-trivial null space exists: there's a non-zero vector $x$ where $Ax = 0$, blocking invertibility
  • Rank provides a quick singularity check—if you find even one dependent row, the matrix is singular

Compare: Full Rank vs. Invertibility—full rank is the condition, invertibility is the consequence. For non-square matrices, full rank doesn't imply invertibility (you need a square matrix for that), but it does tell you about injectivity or surjectivity.
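
A quick numerical check (NumPy; the two matrices are made-up examples, not from the guide) shows a full rank square matrix inverting cleanly while a rank-deficient one is rejected as singular:

```python
# Sketch: full rank <-> invertible for square matrices.
import numpy as np

full_rank = np.array([[2.0, 1.0],
                      [1.0, 3.0]])        # det = 5, rank 2
deficient = np.array([[1.0, 2.0],
                      [2.0, 4.0]])        # second row is twice the first, rank 1

print(np.linalg.matrix_rank(full_rank), np.linalg.det(full_rank))  # 2, ~5.0
print(np.linalg.inv(full_rank) @ full_rank)                        # ~ identity

print(np.linalg.matrix_rank(deficient), np.linalg.det(deficient))  # 1, ~0.0
try:
    np.linalg.inv(deficient)
except np.linalg.LinAlgError as err:
    print("singular:", err)               # inversion fails for the rank-deficient matrix
```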


Rank in Transformations and Products

These concepts show how rank behaves under composition and what it reveals about linear maps.

Rank and Linear Transformations

  • Rank equals the dimension of the image (range) of the transformation—it tells you the dimensionality of possible outputs
  • Higher rank means the transformation reaches more dimensions—a rank-3 matrix can map into a 3D subspace at most
  • Rank determines injectivity and surjectivity: full column rank implies injective (one-to-one); full row rank implies surjective (onto), as checked in the sketch below
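
A sketch of those two checks with NumPy (the tall and wide matrices and the helper names are illustrative assumptions):

```python
# Sketch: full column rank -> injective, full row rank -> surjective.
import numpy as np

tall = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])             # 3x2, rank 2 = number of columns
wide = np.array([[1.0, 0.0, 2.0],
                 [0.0, 1.0, 3.0]])        # 2x3, rank 2 = number of rows

def has_full_column_rank(A):
    return np.linalg.matrix_rank(A) == A.shape[1]   # injective (one-to-one)

def has_full_row_rank(A):
    return np.linalg.matrix_rank(A) == A.shape[0]   # surjective (onto)

print(has_full_column_rank(tall), has_full_row_rank(tall))   # True False
print(has_full_column_rank(wide), has_full_row_rank(wide))   # False True
```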

Rank of Matrix Products

  • $\text{rank}(AB) \leq \min(\text{rank}(A), \text{rank}(B))$—composing transformations can only lose dimensions, never gain them
  • This explains how transformations "bottleneck" each other—if $B$ squashes to 2D, then $AB$ can't exceed 2D regardless of $A$
  • A rank-deficient factor forces a rank-deficient product—the weakest link determines the maximum output dimension

Compare: Rank of $A$ vs. Rank of $AB$—the product's rank is bounded by both factors. This is crucial for understanding why composing any matrix with a projection cannot restore the dimensions the projection collapsed. Exam tip: if asked why $\text{rank}(AB) < \text{rank}(A)$, explain the bottleneck effect.
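
A numerical sanity check of the bound (NumPy; the random matrices are arbitrary illustrations):

```python
# Sketch: rank(AB) <= min(rank(A), rank(B)); the rank-1 factor bottlenecks the product.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))              # generic 4x3, rank 3 with probability 1
B = np.outer(rng.standard_normal(3),
             rng.standard_normal(5))         # 3x5 outer product, rank 1

rank_A  = np.linalg.matrix_rank(A)
rank_B  = np.linalg.matrix_rank(B)
rank_AB = np.linalg.matrix_rank(A @ B)

print(rank_A, rank_B, rank_AB)               # typically 3 1 1
print(rank_AB <= min(rank_A, rank_B))        # True
```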


Quick Reference Table

| Concept | Key Facts |
| --- | --- |
| Definition of Rank | Max linearly independent rows/columns, dimension of column space |
| Computing Rank | Count non-zero rows in REF/RREF, or check if determinant $\neq 0$ |
| Rank-Nullity Theorem | $\text{rank}(A) + \text{nullity}(A) = n$ (number of columns) |
| Full Rank | $\text{rank} = \min(m, n)$, no redundancy, square case implies invertible |
| Invertibility | Square matrix invertible $\iff$ full rank $\iff$ $\det(A) \neq 0$ |
| System Consistency | Consistent if $\text{rank}(A) = \text{rank}([A \mid b])$ |
| Transformation Image | $\dim(\text{image}) = \text{rank}(A)$ |
| Product Rank | $\text{rank}(AB) \leq \min(\text{rank}(A), \text{rank}(B))$ |

Self-Check Questions

  1. If a $4 \times 6$ matrix has rank 3, what is its nullity? How many free variables would appear in the solution to $Ax = 0$?

  2. Compare and contrast: What do full column rank and full row rank each tell you about a linear transformation's injectivity and surjectivity?

  3. A system $Ax = b$ has coefficient matrix with rank 2 and augmented matrix with rank 3. Is the system consistent? Explain using the rank condition.

  4. If $\text{rank}(A) = 5$ and $\text{rank}(B) = 3$, what can you conclude about $\text{rank}(AB)$? Why can't the product have rank 4?

  5. Two $3 \times 3$ matrices both have rank 2. Which properties do they share, and how might they differ? Could one be invertible while the other isn't?
  5. Two 3×33 \times 3 matrices both have rank 2. Which properties do they share, and how might they differ? Could one be invertible while the other isn't?