Vector spaces aren't just abstract mathematical structures—they're the foundation for nearly everything you'll encounter in linear algebra and beyond. When you study differential equations, computer graphics, quantum mechanics, or data science, you're working with vector spaces whether you realize it or not. The concepts here—spanning, independence, dimension, and transformations—give you the language to describe how mathematical objects behave and interact.
Here's what you're really being tested on: can you recognize why certain vectors form a basis, how transformations preserve structure, and what properties make a subspace valid? Don't just memorize definitions—know what concept each item illustrates and how they connect. If you understand that a basis is the "minimal spanning set" and dimension tells you the "degrees of freedom," you'll crush both computational problems and conceptual questions.
Foundational Structures
Every vector space problem starts with understanding what makes a space valid and how smaller spaces live inside larger ones. The key insight is that vector spaces are defined by their behavior under two operations: addition and scalar multiplication.
Definition of a Vector Space
Ten axioms define validity—closure under addition and scalar multiplication, plus associativity and commutativity of addition, an additive identity and inverses, the two distributive laws, compatibility of scalar multiplication, and the scalar identity 1v = v
Scalars come from a field (usually R or C), which determines what "multiplication by a scalar" means
Examples span from concrete to abstract—Rn, polynomial spaces Pn, and continuous function spaces C[a,b] all qualify
Subspaces
Three conditions to verify—contains the zero vector, closed under addition, closed under scalar multiplication (the "subspace test")
Always passes through the origin—a plane in R3 is only a subspace if it contains 0
Intersection of subspaces is always a subspace, but union generally is not—this is a common exam trap
Compare: Vector Space vs. Subspace—both satisfy the same axioms, but a subspace inherits its operations from the parent space. If an FRQ asks you to prove something is a subspace, use the three-condition test, not all eight axioms.
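A quick way to internalize the three-condition test is to try it on a concrete candidate. The sketch below (Python with NumPy, using the hypothetical plane x + y + z = 0 in R3) spot-checks closure on random samples; a real proof argues directly from the defining equation, but the checks mirror exactly what you would verify by hand.

```python
import numpy as np

# Candidate subspace: the plane x + y + z = 0 in R^3 (a hypothetical example).
def in_plane(v, tol=1e-10):
    """Membership test for the plane x + y + z = 0."""
    return abs(v.sum()) < tol

rng = np.random.default_rng(0)

# Condition 1: contains the zero vector.
assert in_plane(np.zeros(3))

# Conditions 2 and 3: spot-check closure under addition and scalar multiplication.
# (Random samples are a sanity check, not a proof; the proof uses the equation itself.)
for _ in range(1000):
    u = rng.normal(size=3); u -= u.mean()   # subtracting the mean puts a vector on the plane
    v = rng.normal(size=3); v -= v.mean()
    c = rng.normal()
    assert in_plane(u + v) and in_plane(c * u)

print("zero vector, addition closure, and scalar closure all hold on the samples")
```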
Independence and Spanning
These twin concepts determine whether you have "enough" vectors and whether you have "too many." Linear independence means no redundancy; span means complete coverage.
Linear Independence and Dependence
Test via the equation c1v1 + c2v2 + ⋯ + cnvn = 0—if only the trivial solution exists, the set is independent (a rank-based check appears after this list)
Geometric interpretation—independent vectors point in "genuinely different" directions; dependent vectors are redundant
In Rn, you cannot have more than n linearly independent vectors—this limits basis size
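In practice, the trivial-solution test reduces to a rank computation: stack the vectors as columns and see whether the rank equals the number of vectors. A minimal NumPy sketch, using made-up vectors where one is the sum of the other two:

```python
import numpy as np

# Independence holds exactly when the matrix whose columns are the vectors
# has full column rank, i.e. the homogeneous system has only the trivial solution.
vectors = [np.array([1., 0., 2.]),
           np.array([0., 1., 1.]),
           np.array([1., 1., 3.])]   # v3 = v1 + v2, so this set is dependent

A = np.column_stack(vectors)
rank = np.linalg.matrix_rank(A)
print("rank:", rank, "-> independent" if rank == len(vectors) else "-> dependent")

# For a square set, a nonzero determinant gives the same answer.
print("det:", np.linalg.det(A))   # ~0 here, confirming dependence
```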
Span of Vectors
Span = all linear combinations—span{v1,v2,…,vk}={c1v1+⋯+ckvk:ci∈R}
Adding vectors can only expand or maintain span—never shrinks it
Span of a single nonzero vector in R3 is a line; span of two independent vectors is a plane
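Membership in a span is also a rank question: b lies in span{v1, v2} exactly when appending b as an extra column does not increase the rank. A small sketch with illustrative vectors (b chosen as 2v1 + 3v2):

```python
import numpy as np

v1 = np.array([1., 0., 1.])
v2 = np.array([0., 1., 1.])
b  = np.array([2., 3., 5.])          # equals 2*v1 + 3*v2, so it should lie in the span

A = np.column_stack([v1, v2])
in_span = np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)
print("b in span{v1, v2}?", in_span)

# The coefficients come from least squares (exact when b is actually in the span).
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print("coefficients:", coeffs)       # approximately [2, 3]
```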
Basis and Dimension
Basis = linearly independent spanning set—the minimal set that generates the entire space
Dimension = number of basis vectors—dim(Rn)=n, dim(P2)=3 (polynomials up to degree 2)
All bases of a space have the same size—this is why dimension is well-defined and so powerful
Compare: Span vs. Basis—span tells you what you can reach; basis tells you the most efficient way to reach everything. A spanning set might have redundant vectors, but a basis never does.
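One way to see the span/basis distinction in action: strip redundant vectors out of a spanning set until the rank stops growing, and the count you end with is the dimension. A sketch of that greedy extraction, using a made-up redundant spanning set for a plane in R3:

```python
import numpy as np

# A redundant spanning set for a plane in R^3 (hypothetical vectors): its span
# has dimension 2, so any basis extracted from it must contain exactly 2 vectors.
spanning_set = [np.array([1., 0., 1.]),
                np.array([0., 1., 1.]),
                np.array([1., 1., 2.]),    # = v1 + v2, redundant
                np.array([2., 0., 2.])]    # = 2*v1, redundant

# Greedy extraction: keep a vector only if it raises the rank of what we have so far.
basis = []
for v in spanning_set:
    if np.linalg.matrix_rank(np.column_stack(basis + [v])) > len(basis):
        basis.append(v)

print("dimension of the span:", len(basis))   # 2
print("basis vectors:", basis)
```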
Linear Transformations and Their Properties
Transformations are functions between vector spaces that "play nice" with the structure. The preservation of addition and scalar multiplication is what makes them linear.
Linear Transformations
Two properties define linearity—T(u+v)=T(u)+T(v) and T(cv)=cT(v)
Matrix representation—every linear transformation T:Rn→Rm corresponds to an m×n matrix
Composition of transformations equals matrix multiplication—order matters!
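Here is a small NumPy sketch of composition as matrix multiplication, using a 90° rotation and a reflection as the two (illustrative) transformations; swapping the order of the product changes the result.

```python
import numpy as np

R = np.array([[0., -1.],
              [1.,  0.]])   # rotate by 90 degrees counterclockwise
F = np.array([[1.,  0.],
              [0., -1.]])   # reflect across the x-axis

v = np.array([1., 2.])

# "Reflect, then rotate" is R @ F; "rotate, then reflect" is F @ R.
print(R @ (F @ v), (R @ F) @ v)   # same result: composition = matrix product
print(np.allclose(R @ F, F @ R))  # False: order matters
```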
Null Space and Range
Null space (kernel) = {v:T(v)=0}—measures what information the transformation "destroys"
Range (image) = {T(v):v∈V}—measures what outputs are actually achievable
Rank-Nullity Theorem—dim(null T)+dim(range T)=dim(domain), your most powerful dimension-counting tool
Compare: Null Space vs. Range—null space lives in the domain, range lives in the codomain. A transformation is injective (one-to-one) iff null space = {0}; it's surjective (onto) iff range = entire codomain.
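The Rank-Nullity Theorem is easy to verify numerically: the rank gives dim(range T), and SciPy's null_space gives a basis for the kernel. A sketch with a hypothetical 3×4 matrix whose third row is the sum of the first two:

```python
import numpy as np
from scipy.linalg import null_space

# A hypothetical 3x4 matrix viewed as a map T: R^4 -> R^3.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])   # row 3 = row 1 + row 2, so the rank drops to 2

rank    = np.linalg.matrix_rank(A)      # dim(range T)
kernel  = null_space(A)                 # orthonormal basis for null(T), one basis vector per column
nullity = kernel.shape[1]               # dim(null T)

print("rank:", rank, "nullity:", nullity)
print("rank + nullity == dim(domain)?", rank + nullity == A.shape[1])   # Rank-Nullity Theorem
```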
Eigentheory
Eigenvectors reveal the "natural directions" of a transformation—directions that get stretched or compressed but not rotated. This is where linear algebra meets dynamics and differential equations.
Eigenvalues and Eigenvectors
Defining equation—Av = λv where v ≠ 0; the eigenvector v is only scaled by the factor λ
Finding eigenvalues—solve det(A−λI)=0 (the characteristic equation)
Applications everywhere—stability analysis, principal component analysis, solving x′=Ax in differential equations
Compare: Null Space vs. Eigenspace—the null space of A is the eigenspace for λ=0. If λ=0 is an eigenvalue, the matrix is singular (non-invertible).
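A short sketch tying these ideas together: a hypothetical singular 2×2 matrix must have λ = 0 as an eigenvalue, and each eigenpair should satisfy Av = λv.

```python
import numpy as np

A = np.array([[2., 1.],
              [4., 2.]])            # second row is twice the first -> det(A) = 0, so A is singular

eigenvalues, eigenvectors = np.linalg.eig(A)
print("eigenvalues:", eigenvalues)  # approximately [4, 0]; the 0 confirms non-invertibility

# Verify the defining equation A v = lambda v for each eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))   # True for both
```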
Inner Product Structures
Inner products add geometry to algebra—suddenly you can talk about lengths, angles, and perpendicularity. The inner product generalizes the dot product to abstract spaces.
Inner Product Spaces
Inner product axioms—linearity in first argument, symmetry (or conjugate symmetry for C), positive definiteness
Induces a norm—∥v∥ = √⟨v,v⟩ gives vector "length"
Standard example—Rn with dot product: ⟨u,v⟩=u⋅v=∑uivi
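As a tiny sanity check, the dot product and the norm it induces can be computed directly (the vectors here are just illustrative):

```python
import numpy as np

u = np.array([1., 2., 2.])
v = np.array([3., 0., 4.])

inner  = np.dot(u, v)                   # <u, v> = sum of u_i * v_i
norm_u = np.sqrt(np.dot(u, u))          # ||u|| = sqrt(<u, u>)

print("inner product:", inner)          # 1*3 + 2*0 + 2*4 = 11
print("norm of u:", norm_u)             # sqrt(1 + 4 + 4) = 3
print(np.isclose(norm_u, np.linalg.norm(u)))   # matches NumPy's built-in norm
```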
Orthogonality and Orthonormal Bases
Orthogonal means perpendicular—⟨u,v⟩=0; orthonormal adds ∥v∥=1 for each vector
Gram-Schmidt process—algorithm to convert any basis into an orthonormal basis
Projection formula simplifies—with orthonormal basis {ei}, coordinates are just ci=⟨v,ei⟩
Compare: Orthogonal vs. Orthonormal—both involve perpendicularity, but orthonormal vectors are also unit length. Orthonormal bases make coefficient calculations trivial—no matrix inversion needed.
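A compact sketch of classical Gram-Schmidt on an assumed (non-orthogonal) basis of R3, followed by the coordinate formula ci = ⟨v, ei⟩; this is the textbook algorithm, not a numerically hardened version.

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn an independent list into an orthonormal basis."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, e) * e for e in basis)   # subtract projections onto earlier vectors
        basis.append(w / np.linalg.norm(w))            # normalize to unit length
    return basis

# A hypothetical basis of R^3 that is neither orthogonal nor normalized.
raw = [np.array([1., 1., 0.]),
       np.array([1., 0., 1.]),
       np.array([0., 1., 1.])]
e = gram_schmidt(raw)

# With an orthonormal basis, coordinates are just inner products: c_i = <v, e_i>.
v = np.array([2., -1., 3.])
coords = [np.dot(v, ei) for ei in e]
print(np.allclose(sum(c * ei for c, ei in zip(coords, e)), v))   # True: v is reconstructed exactly
```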
Quick Reference Table
| Concept | Best Examples |
| --- | --- |
| Subspace verification | Zero vector test, closure under addition/scalar multiplication |
| Linear independence | Trivial solution test, determinant ≠ 0 for square systems |
| Basis construction | Standard basis {e1, …, en}, polynomial basis {1, x, x2} |
| Dimension counting | Rank-Nullity Theorem applications |
| Transformation properties | Kernel for injectivity, range for surjectivity |
| Eigenvalue computation | Characteristic equation det(A − λI) = 0 |
| Orthogonalization | Gram-Schmidt process |
| Projection | proj_u v = (⟨v,u⟩ / ⟨u,u⟩) u |
Self-Check Questions
If a set of vectors spans R4 but contains 5 vectors, what can you conclude about their linear independence? Why?
Compare and contrast the null space and range of a linear transformation. How does the Rank-Nullity Theorem connect them?
A subspace of R3 has dimension 2. What geometric object does it represent, and what must be true about its relationship to the origin?
Given a matrix with eigenvalue λ=0, what can you immediately conclude about the matrix's invertibility and null space?
Why does an orthonormal basis make finding the coordinates of a vector so much simpler than an arbitrary basis? What formula would you use?