Why This Matters
The rank of a matrix is one of the most powerful diagnostic tools in linear algebra—it summarizes, in a single number, what a matrix can and cannot do. When you're solving systems of equations, determining invertibility, or analyzing linear transformations, rank is your go-to concept. You're being tested on your ability to connect rank to linear independence, solution spaces, invertibility, and the structure of transformations, so understanding rank deeply will unlock multiple problem types on your exams.
Don't just memorize that "rank equals the number of linearly independent rows." Instead, know why rank matters: it reveals the dimension of the image, predicts the number of solutions to a system, and determines whether a matrix can be inverted. Every concept in this guide connects back to one core idea—rank measures the essential dimensionality of the information a matrix carries. Master that, and you've got this.
Foundational Definitions
These concepts establish what rank actually means and how it relates to the structure of a matrix. Rank quantifies the maximum number of "useful" directions a matrix can represent.
Definition of Matrix Rank
- The rank is the maximum number of linearly independent row or column vectors—these are the vectors that genuinely contribute new information to the matrix
- Rank equals the dimension of the column space (or row space)—the vector space spanned by the matrix's columns or rows
- Rank measures the "non-degenerateness" of a linear system—a higher rank means less redundancy in the equations
Relationship Between Rank and Linear Independence
- Linear independence means no vector is a combination of the others—if you can write one row as a sum of other rows, that row doesn't contribute to rank
- Rank counts exactly how many rows (or columns) are independent—redundant vectors get "collapsed" in the rank calculation
- If rank equals the number of rows, all rows are linearly independent—no row is wasted or duplicated
Compare: Definition of Rank vs. Linear Independence—both describe the same phenomenon from different angles. Rank gives you the number, while linear independence describes the property of the vectors. FRQs often ask you to explain why adding a dependent row doesn't change rank.
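The sketch below demonstrates exactly that point—adding a dependent row leaves rank unchanged. It's a minimal example assuming NumPy (not part of the original guide); the matrix values are made up, and `numpy.linalg.matrix_rank` estimates rank via the SVD:

```python
import numpy as np

# Two independent rows: rank 2.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0]])

# Append a dependent row (row 0 plus twice row 1): it adds no new information.
dependent = A[0] + 2 * A[1]
B = np.vstack([A, dependent])

print(np.linalg.matrix_rank(A))  # 2
print(np.linalg.matrix_rank(B))  # still 2: the dependent row is "collapsed"
```

Stacking the dependent row changes the matrix's shape but not its rank, which is precisely the "collapsing" described above.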
Computing Rank
These methods give you practical tools for finding rank. Row reduction reveals rank by eliminating redundancy systematically.
- Row echelon form (REF) reveals rank by counting non-zero rows—each non-zero row after reduction represents one independent direction
- Reduced row echelon form (RREF) makes pivot positions explicit—the number of pivots equals the rank
- For square matrices, a non-zero determinant means full rank—determinants provide a quick invertibility check without full row reduction
Rank and Matrix Dimensions
- Rank cannot exceed the smaller of the row or column count—written as rank(A)≤min(m,n) for an m×n matrix
- A "wide" matrix (more columns than rows) has max rank equal to row count—there simply aren't enough rows to span more dimensions
- Rank tells you the dimension of the image—how many dimensions the transformation can actually "reach"
Compare: REF vs. Determinants for finding rank—REF works for any matrix shape, while determinants only apply to square matrices. Use determinants for quick checks on square matrices; use row reduction when you need to handle rectangular matrices or find the actual null space.
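Here is a minimal sketch contrasting the two approaches, assuming SymPy for exact row reduction and NumPy for the determinant check (the matrices are illustrative examples, not from the original guide):

```python
import numpy as np
import sympy as sp

# Rectangular matrix: determinants don't apply, but row reduction always works.
M = sp.Matrix([[1, 2, 3, 4],
               [2, 4, 6, 8],    # 2 * row 0: contributes nothing new
               [0, 1, 1, 1]])

rref_form, pivot_cols = M.rref()  # rref() returns (RREF matrix, pivot column indices)
print(len(pivot_cols))            # 2 pivots, so rank 2 (and 2 <= min(3, 4), as required)
print(M.rank())                   # SymPy's built-in agrees: 2

# Square matrix: a non-zero determinant is a quick full-rank check.
S = np.array([[2.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.det(S))           # 1.0, non-zero, so S has full rank and is invertible
```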
The Rank-Nullity Connection
This theorem is the bridge between what a matrix "does" (its image) and what it "kills" (its kernel). The rank-nullity theorem partitions the domain into productive and nullified dimensions.
Rank-Nullity Theorem
- The theorem states rank(A)+nullity(A)=n where n is the number of columns—this is non-negotiable and always holds
- Nullity counts free variables in the solution to Ax=0—the dimension of the kernel or null space
- This links image dimension to kernel dimension—if rank goes up, nullity must go down, and vice versa; the sketch below checks this numerically
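A quick numerical check of the theorem, sketched with SymPy (the matrix is an illustrative example; `nullspace()` returns a basis for the kernel, so its length is the nullity):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 3, 1, 1]])   # row 2 = row 0 + row 1, so the rank is 2

n = A.cols                      # n = 4 columns
rank = A.rank()                 # 2
nullity = len(A.nullspace())    # kernel basis has 2 vectors, so nullity is 2
print(rank + nullity == n)      # True: 2 + 2 == 4, exactly as the theorem demands
```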
Applications in Solving Systems of Equations
- A system is consistent if rank(A)=rank([A∣b])—the augmented matrix shouldn't have "extra" independent information
- Full rank means a unique solution; lower rank means infinitely many or none—the gap between rank and column count tells you how many free variables exist
- Rank identifies dependencies among equations—redundant equations don't add constraints, they just clutter the system
Compare: Rank vs. Nullity—they're two sides of the same coin. Rank measures what the transformation preserves; nullity measures what it destroys. If an exam asks about free variables, think nullity. If it asks about the dimension of the output space, think rank.
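A minimal sketch of the consistency test, assuming SymPy (illustrative matrices; `row_join` builds the augmented matrix [A∣b]):

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [2, 2]])                  # rank 1: the rows are dependent

# Consistent case: b lies in the column space of A.
b_good = sp.Matrix([3, 6])
aug_good = A.row_join(b_good)            # builds the augmented matrix [A | b]
print(A.rank() == aug_good.rank())       # True (1 == 1): infinitely many solutions

# Inconsistent case: b carries "extra" independent information.
b_bad = sp.Matrix([3, 7])
aug_bad = A.row_join(b_bad)
print(A.rank() == aug_bad.rank())        # False (1 != 2): no solution exists
```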
Rank and Matrix Properties
These concepts connect rank to important structural properties like invertibility and transformation behavior.
Full Rank Matrices
- Full rank means rank(A)=min(m,n)—every row and column contributes maximally to the span
- No redundant rows or columns exist—the matrix carries the maximum possible information for its size
- Full rank square matrices are invertible—this is the key test for whether A⁻¹ exists
Rank and Invertibility of Matrices
- A square matrix is invertible if and only if it has full rank—rank(A)=n for an n×n matrix
- Rank less than n means a non-trivial null space exists—there's a non-zero vector x where Ax=0, blocking invertibility
- Rank provides a quick singularity check—if you find even one dependent row, the matrix is singular
Compare: Full Rank vs. Invertibility—full rank is the condition, invertibility is the consequence. For non-square matrices, full rank doesn't imply invertibility (you need a square matrix for that), but it does tell you about injectivity or surjectivity.
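A minimal invertibility check via rank, sketched with NumPy (the example matrices are made up for illustration):

```python
import numpy as np

full = np.array([[1.0, 2.0],
                 [3.0, 4.0]])            # det = -2, rank 2: invertible
singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])        # row 1 = 2 * row 0, rank 1: singular

for name, M in [("full", full), ("singular", singular)]:
    n = M.shape[0]
    print(name, "invertible:", np.linalg.matrix_rank(M) == n)

# The rank-deficient matrix has a non-trivial null space: x = (2, -1)
# satisfies singular @ x = 0, and that is exactly what blocks inversion.
print(singular @ np.array([2.0, -1.0]))  # [0. 0.]
```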
Rank and Linear Transformations
These concepts show how rank behaves under composition and what it reveals about linear maps.
- Rank equals the dimension of the image (range) of the transformation—it tells you the dimensionality of possible outputs
- Higher rank means the transformation reaches more dimensions—a rank-3 matrix can map into a 3D subspace at most
- Rank determines injectivity and surjectivity—full column rank implies injective (one-to-one); full row rank implies surjective (onto), as the sketch below verifies
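A small sketch of both conditions, assuming NumPy and illustrative matrices:

```python
import numpy as np

tall = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])       # 3x2: maps R^2 into R^3

wide = np.array([[1.0, 0.0, 1.0],
                 [0.0, 1.0, 1.0]])  # 2x3: maps R^3 onto R^2

m, n = tall.shape
print(np.linalg.matrix_rank(tall) == n)  # True: full column rank, so injective
m, n = wide.shape
print(np.linalg.matrix_rank(wide) == m)  # True: full row rank, so surjective
```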
Rank of Matrix Products
- rank(AB)≤min(rank(A),rank(B))—composing transformations can only lose dimensions, never gain them
- This explains how transformations "bottleneck" each other—if B squashes to 2D, then AB can't exceed 2D regardless of A
- A rank-deficient factor forces a rank-deficient product—the weakest link determines the maximum output dimension
Compare: Rank of A vs. Rank of AB—the product's rank is bounded by both factors. This is crucial for understanding why composing with a projection can never recover the dimensions the projection discards. Exam tip: if asked why rank(AB) can be less than rank(A), explain the bottleneck effect.
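The bottleneck effect in a sketch, assuming NumPy (the projection matrix B and the random seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # a generic 4x4 matrix: full rank 4

# B projects R^4 onto its first two coordinates, "squashing" everything to 2D.
B = np.diag([1.0, 1.0, 0.0, 0.0])

print(np.linalg.matrix_rank(A))      # 4
print(np.linalg.matrix_rank(B))      # 2
print(np.linalg.matrix_rank(A @ B))  # 2: bottlenecked by B, no matter what A does
```

Because A is invertible here, rank(AB) equals rank(B) exactly—the 2D bottleneck is unavoidable.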
Quick Reference Table
| Concept | Key Facts |
| --- | --- |
| Definition of Rank | Max linearly independent rows/columns; dimension of the column space |
| Computing Rank | Count non-zero rows in REF/RREF, or check that det ≠ 0 (square matrices only) |
| Rank-Nullity Theorem | rank(A)+nullity(A)=n (number of columns) |
| Full Rank | rank=min(m,n); no redundancy; square case implies invertible |
| Invertibility | Square matrix invertible ⟺ full rank ⟺ det(A)≠0 |
| System Consistency | Consistent iff rank(A)=rank([A∣b]) |
| Transformation Image | dim(image)=rank(A) |
| Product Rank | rank(AB)≤min(rank(A),rank(B)) |
Self-Check Questions
- If a 4×6 matrix has rank 3, what is its nullity? How many free variables would appear in the solution to Ax=0?
- Compare and contrast: What do full column rank and full row rank each tell you about a linear transformation's injectivity and surjectivity?
- A system Ax=b has coefficient matrix with rank 2 and augmented matrix with rank 3. Is the system consistent? Explain using the rank condition.
- If rank(A)=5 and rank(B)=3, what can you conclude about rank(AB)? Why can't the product have rank 4?
- Two 3×3 matrices both have rank 2. Which properties do they share, and how might they differ? Could one be invertible while the other isn't?