Vector spaces are the foundation for nearly everything you'll encounter in linear algebra and beyond. Differential equations, computer graphics, quantum mechanics, and data science all rely on vector spaces. The concepts here (span, linear independence, dimension, and transformations) give you the language to describe how mathematical objects behave and interact.
What you're really being tested on: can you recognize why certain vectors form a basis, how transformations preserve structure, and what properties make a subspace valid? Don't just memorize definitions. Know what each concept illustrates and how they connect. A basis is the "minimal spanning set," and dimension tells you the "degrees of freedom" in a space. Understanding those ideas will carry you through both computational problems and conceptual questions.
Every vector space problem starts with understanding what makes a space valid and how smaller spaces live inside larger ones. Vector spaces are defined entirely by their behavior under two operations: addition and scalar multiplication.
A vector space is a set equipped with vector addition and scalar multiplication that satisfies eight axioms. These include closure under both operations, associativity and commutativity of addition, existence of an additive identity (the zero vector) and additive inverses, and compatibility of scalar multiplication with field operations (distributive laws, multiplicative identity).
The eight axioms might feel like a lot, but most of the time you're working in spaces where they're clearly satisfied. Where they really matter is when you need to show something isn't a vector space (for instance, a set where addition doesn't stay inside the set).
A subspace is a subset of a vector space that is itself a vector space under the same operations. Rather than checking all eight axioms, you only need three conditions (the subspace test):

1. The subset contains the zero vector.
2. It is closed under addition: if $\mathbf{u}$ and $\mathbf{v}$ are in the subset, so is $\mathbf{u} + \mathbf{v}$.
3. It is closed under scalar multiplication: if $\mathbf{v}$ is in the subset and $c$ is any scalar, so is $c\mathbf{v}$.
A subspace always passes through the origin. A plane in $\mathbb{R}^3$ is only a subspace if it contains $\mathbf{0}$. A plane that's been shifted away from the origin fails the test.
One common exam trap: the intersection of two subspaces is always a subspace, but the union generally is not. Think about two different lines through the origin in $\mathbb{R}^2$. Their union contains vectors from both lines, but adding a vector from one line to a vector from the other can land you outside both lines.
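Here's a quick NumPy sketch of that trap (the specific lines are just illustrative): take one vector from the x-axis and one from the y-axis, add them, and check that the sum lies on neither line.

```python
import numpy as np

# Two distinct lines through the origin in R^2: the x-axis and the y-axis.
u = np.array([1.0, 0.0])   # lies on the x-axis
v = np.array([0.0, 1.0])   # lies on the y-axis

w = u + v                  # (1, 1): a vector from each line, added together

on_x_axis = bool(np.isclose(w[1], 0.0))   # x-axis means second coordinate is 0
on_y_axis = bool(np.isclose(w[0], 0.0))   # y-axis means first coordinate is 0

# w is in neither line, so the union fails closure under addition.
print(on_x_axis or on_y_axis)  # False
```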
Compare: Vector Space vs. Subspace: both satisfy the same axioms, but a subspace inherits its operations from the parent space. If you're asked to prove something is a subspace, use the three-condition test, not all eight axioms.
These twin concepts determine whether you have "enough" vectors and whether you have "too many." Linear independence means no redundancy; span means complete coverage.
A set of vectors $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ is linearly independent if the only solution to

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$$

is $c_1 = c_2 = \cdots = c_n = 0$ (the trivial solution). If any nontrivial solution exists, the set is linearly dependent, meaning at least one vector can be written as a combination of the others.
To test independence in practice, set up the equation above, form the matrix whose columns are the vectors, and row reduce. If every column has a pivot, the set is independent.
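The pivot count is exactly the matrix rank, so the test can be sketched in NumPy (the example vectors are made up for illustration):

```python
import numpy as np

# Candidate vectors become the columns of a matrix.
independent_set = np.column_stack([[1, 0, 0], [1, 1, 0], [1, 1, 1]])
dependent_set   = np.column_stack([[1, 0, 1], [2, 1, 0], [3, 1, 1]])  # col3 = col1 + col2

def is_independent(cols):
    # Every column has a pivot exactly when the rank equals
    # the number of columns.
    return np.linalg.matrix_rank(cols) == cols.shape[1]

print(is_independent(independent_set))  # True
print(is_independent(dependent_set))    # False
```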
The span of a set of vectors is the collection of all possible linear combinations of those vectors:

$$\operatorname{span}\{\mathbf{v}_1, \dots, \mathbf{v}_n\} = \{c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n : c_1, \dots, c_n \in \mathbb{R}\}$$
A basis for a vector space $V$ is a set of vectors that is both linearly independent and spans $V$. It's the minimal spanning set: remove any vector and you lose coverage; add any vector and you introduce redundancy.
Compare: Span vs. Basis: span tells you what you can reach; a basis tells you the most efficient way to reach everything. A spanning set might have redundant vectors, but a basis never does.
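For $\mathbb{R}^n$, both basis conditions collapse into one rank check, since $n$ independent vectors in an $n$-dimensional space automatically span it. A hedged NumPy sketch (the helper name and example vectors are ours, not the guide's):

```python
import numpy as np

def is_basis_of_Rn(vectors):
    """Check whether the given vectors form a basis of R^n."""
    A = np.column_stack(vectors)
    n = A.shape[0]
    # A basis of R^n needs exactly n vectors of full rank; independence
    # and spanning then hold simultaneously.
    return A.shape[1] == n and np.linalg.matrix_rank(A) == n

print(is_basis_of_Rn([[1, 0], [0, 1]]))          # standard basis: True
print(is_basis_of_Rn([[1, 0], [0, 1], [1, 1]]))  # redundant third vector: False
```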
Transformations are functions between vector spaces that preserve the vector space structure. The preservation of addition and scalar multiplication is exactly what makes them "linear."
A function $T: V \to W$ is a linear transformation if it satisfies two properties for all vectors $\mathbf{u}, \mathbf{v} \in V$ and all scalars $c$:

1. Additivity: $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$
2. Homogeneity: $T(c\mathbf{u}) = cT(\mathbf{u})$

These two conditions can be combined into one: $T(c\mathbf{u} + d\mathbf{v}) = cT(\mathbf{u}) + dT(\mathbf{v})$.
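Every matrix gives a linear map, and the combined condition can be spot-checked numerically. A small sketch (random matrix and vectors chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # any matrix defines a linear map T(x) = A @ x

def T(x):
    return A @ x

u, v = rng.standard_normal(3), rng.standard_normal(3)
c, d = 2.5, -1.0

# Combined linearity condition: T(cu + dv) == c T(u) + d T(v)
lhs = T(c * u + d * v)
rhs = c * T(u) + d * T(v)
print(np.allclose(lhs, rhs))  # True
```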
The null space (kernel) of $T$ is the set of vectors that map to $\mathbf{0}$; the range (image) is the set of all outputs $T(\mathbf{v})$. Both are subspaces (of the domain and codomain, respectively).
The Rank-Nullity Theorem ties them together:

$$\operatorname{rank}(T) + \operatorname{nullity}(T) = \dim(V)$$

where $\operatorname{rank}(T)$ is the dimension of the range and $\operatorname{nullity}(T)$ is the dimension of the null space.
This is your most powerful dimension-counting tool. If you know any two of these three quantities, you can find the third.
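You can watch the theorem hold numerically. In this sketch (matrix invented for illustration), the nullity is counted directly from the singular values rather than derived from the theorem, so the final sum is a genuine check:

```python
import numpy as np

# A 3x4 matrix viewed as a map T: R^4 -> R^3.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # row3 = row1 + row2, so the rank drops

rank = np.linalg.matrix_rank(A)

# Nullity counted independently: dimensions of the domain that the map
# collapses, i.e. 4 minus the number of nonzero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = A.shape[1] - int(np.sum(s > 1e-10))

print(rank, nullity, rank + nullity)  # 2 2 4 -- and dim of the domain is 4
```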
Compare: Null Space vs. Range: the null space lives in the domain; the range lives in the codomain. A transformation is injective (one-to-one) if and only if its null space is $\{\mathbf{0}\}$. It's surjective (onto) if and only if its range equals the entire codomain.
Eigenvectors reveal the "natural directions" of a transformation: directions that get stretched or compressed but not rotated. This is where linear algebra connects directly to dynamics and differential equations.
The defining equation is:

$$A\mathbf{v} = \lambda\mathbf{v}, \qquad \mathbf{v} \neq \mathbf{0}$$

The nonzero vector $\mathbf{v}$ is an eigenvector, and the scalar $\lambda$ is the corresponding eigenvalue. Under the transformation $A$, the eigenvector only gets scaled by $\lambda$.
Finding eigenvalues step by step:

1. Form the characteristic equation $\det(A - \lambda I) = 0$.
2. Solve it for $\lambda$; the roots are the eigenvalues.
3. For each eigenvalue, solve $(A - \lambda I)\mathbf{v} = \mathbf{0}$ for the nonzero solutions $\mathbf{v}$ (the eigenvectors).
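In practice you'd hand those steps to a library. A sketch with an illustrative symmetric matrix, verifying $A\mathbf{v} = \lambda\mathbf{v}$ for each pair:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])

# Characteristic polynomial: lambda^2 - 4*lambda + 3, with roots 1 and 3.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the defining equation for each eigenpair (eigenvectors are columns).
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))  # True for each pair

print(np.allclose(sorted(eigenvalues), [1.0, 3.0]))  # True
```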
The set of all eigenvectors for a given $\lambda$, together with the zero vector, forms the eigenspace for that eigenvalue.
Applications are everywhere: stability analysis of systems, principal component analysis in statistics, and solving systems of differential equations of the form $\mathbf{x}' = A\mathbf{x}$.
Compare: Null Space vs. Eigenspace: the null space of $A$ is exactly the eigenspace for $\lambda = 0$. If $\lambda = 0$ is an eigenvalue, the matrix is singular (non-invertible), because $\det(A) = \det(A - 0I) = 0$.
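A quick numerical illustration of that equivalence, using a deliberately singular matrix of our own choosing:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])   # second row is twice the first, so A is singular

eigenvalues = np.linalg.eig(A)[0]

has_zero_eigenvalue = bool(np.any(np.isclose(eigenvalues, 0.0)))
determinant = np.linalg.det(A)

print(has_zero_eigenvalue, np.isclose(determinant, 0.0))  # True True

# The eigenspace for lambda = 0 is the null space: here spanned by (2, -1).
v = np.array([2., -1.])
print(np.allclose(A @ v, 0.0))  # True
```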
Inner products add geometry to algebra. With an inner product, you can talk about lengths, angles, and perpendicularity. The inner product generalizes the familiar dot product to abstract vector spaces.
An inner product on a real vector space $V$ is a function $\langle \cdot, \cdot \rangle$ that takes two vectors and returns a scalar, satisfying:

1. Symmetry: $\langle \mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{v}, \mathbf{u} \rangle$
2. Linearity in the first argument: $\langle a\mathbf{u} + b\mathbf{v}, \mathbf{w} \rangle = a\langle \mathbf{u}, \mathbf{w} \rangle + b\langle \mathbf{v}, \mathbf{w} \rangle$
3. Positive definiteness: $\langle \mathbf{v}, \mathbf{v} \rangle \geq 0$, with equality only when $\mathbf{v} = \mathbf{0}$
The inner product induces a norm (a notion of length): $\|\mathbf{v}\| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}$.
The standard example is $\mathbb{R}^n$ with the dot product: $\langle \mathbf{u}, \mathbf{v} \rangle = u_1v_1 + u_2v_2 + \cdots + u_nv_n$.
Two vectors are orthogonal if $\langle \mathbf{u}, \mathbf{v} \rangle = 0$. An orthonormal set goes one step further: every vector is also a unit vector ($\|\mathbf{v}\| = 1$).
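With the dot product these checks are one-liners. A sketch with two illustrative vectors in $\mathbb{R}^3$:

```python
import numpy as np

u = np.array([3., 4., 0.])
v = np.array([-4., 3., 0.])

# Orthogonal: the dot product is zero.
print(np.isclose(u @ v, 0.0))                # True

# Norms induced by the dot product.
print(np.linalg.norm(u), np.linalg.norm(v))  # 5.0 5.0

# Dividing by the norms gives an orthonormal pair.
e1, e2 = u / np.linalg.norm(u), v / np.linalg.norm(v)
print(np.isclose(np.linalg.norm(e1), 1.0))   # True
```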
The Gram-Schmidt process converts any basis into an orthonormal basis. Here's the idea for two vectors $\mathbf{v}_1, \mathbf{v}_2$:

1. Keep the first vector: $\mathbf{u}_1 = \mathbf{v}_1$.
2. Subtract from $\mathbf{v}_2$ its projection onto $\mathbf{u}_1$: $\mathbf{u}_2 = \mathbf{v}_2 - \dfrac{\langle \mathbf{v}_2, \mathbf{u}_1 \rangle}{\langle \mathbf{u}_1, \mathbf{u}_1 \rangle}\mathbf{u}_1$.
3. Normalize both: $\mathbf{e}_i = \mathbf{u}_i / \|\mathbf{u}_i\|$.
For more vectors, repeat step 2 by subtracting projections onto all previously computed vectors.
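That repetition is a short loop. A minimal classical Gram-Schmidt sketch (the function name and test vectors are ours; it assumes the inputs are linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        # Subtract the projection onto every previously computed vector.
        for e in basis:
            w = w - (w @ e) * e
        basis.append(w / np.linalg.norm(w))
    return basis

e1, e2 = gram_schmidt([np.array([3., 1.]), np.array([2., 2.])])
print(np.isclose(e1 @ e2, 0.0))             # orthogonal: True
print(np.isclose(np.linalg.norm(e2), 1.0))  # unit length: True
```

(Numerically, the "modified" Gram-Schmidt variant, which updates `w` against each already-orthonormalized vector in turn as above, is more stable than subtracting all projections of the original vector at once.)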
With an orthonormal basis $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$, finding coordinates becomes trivial. The coefficient for each basis vector is just:

$$c_i = \langle \mathbf{v}, \mathbf{e}_i \rangle$$
No system of equations to solve, no matrix inversion needed.
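A sketch of this shortcut, using an illustrative orthonormal basis of $\mathbb{R}^2$ (the standard basis rotated 45 degrees):

```python
import numpy as np

# An orthonormal basis of R^2.
e1 = np.array([1., 1.]) / np.sqrt(2)
e2 = np.array([-1., 1.]) / np.sqrt(2)

v = np.array([3., 5.])

# Coordinates are just inner products with the basis vectors.
c1, c2 = v @ e1, v @ e2

# Reconstruct v from its coordinates to confirm -- no linear solve needed.
print(np.allclose(c1 * e1 + c2 * e2, v))  # True
```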
Compare: Orthogonal vs. Orthonormal: both involve perpendicularity, but orthonormal vectors are also unit length. Orthonormal bases make coefficient calculations trivial.
| Concept | Best Examples |
|---|---|
| Subspace verification | Zero vector test, closure under addition/scalar multiplication |
| Linear independence | Trivial solution test, determinant for square systems |
| Basis construction | Standard basis $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$, polynomial basis $\{1, x, x^2, \dots\}$ |
| Dimension counting | Rank-Nullity Theorem applications |
| Transformation properties | Kernel for injectivity, range for surjectivity |
| Eigenvalue computation | Characteristic equation $\det(A - \lambda I) = 0$ |
| Orthogonalization | Gram-Schmidt process |
| Projection | $\operatorname{proj}_{\mathbf{u}}(\mathbf{v}) = \dfrac{\langle \mathbf{v}, \mathbf{u} \rangle}{\langle \mathbf{u}, \mathbf{u} \rangle}\mathbf{u}$ |
If a set of vectors spans $\mathbb{R}^4$ but contains 5 vectors, what can you conclude about their linear independence? Why?
Compare and contrast the null space and range of a linear transformation. How does the Rank-Nullity Theorem connect them?
A subspace of $\mathbb{R}^3$ has dimension 2. What geometric object does it represent, and what must be true about its relationship to the origin?
Given a matrix with eigenvalue $\lambda = 0$, what can you immediately conclude about the matrix's invertibility and null space?
Why does an orthonormal basis make finding the coordinates of a vector so much simpler than an arbitrary basis? What formula would you use?