Definition of vector spaces
A vector space is a collection of objects (called vectors) that you can add together and multiply by scalars, with the results always staying in the collection. This structure shows up everywhere in mathematics, physics, and computer science because it captures the essence of "linear" behavior in a single, clean framework.
What makes vector spaces powerful is their generality. Once you prove something about vector spaces in the abstract, that result applies to arrows in 3D space, to polynomials, to matrices, and to functions. You learn one set of rules and unlock tools for dozens of different settings.
Properties of vector spaces
A vector space over a field (usually ℝ or ℂ) must satisfy all of the following axioms. Missing even one means you don't have a vector space.
Addition axioms:
- Closure under addition: If u and v are in the space, then u + v is also in the space.
- Associativity: (u + v) + w = u + (v + w)
- Commutativity: u + v = v + u
- Zero vector: There exists a vector 0 such that v + 0 = v for all v.
- Additive inverses: For every v, there exists −v such that v + (−v) = 0.
Scalar multiplication axioms:
- Closure under scalar multiplication: If c is a scalar and v is in the space, then cv is in the space.
- Distributivity over vector addition: c(u + v) = cu + cv
- Distributivity over scalar addition: (c + d)v = cv + dv
- Associativity of scalar multiplication: c(dv) = (cd)v
- Multiplicative identity: 1v = v
Examples of vector spaces
- ℝⁿ (real coordinate spaces): Vectors with n real-number components. ℝ² is the familiar plane, ℝ³ is 3D space.
- ℂⁿ (complex coordinate spaces): Same idea, but components are complex numbers.
- Polynomial spaces Pₙ: All polynomials of degree at most n. For example, P₂ includes vectors like 2x² − 3x + 1. Addition and scalar multiplication work the way you'd expect.
- Matrix spaces: All m × n matrices, for fixed m and n. You add them entry-by-entry and scale them entry-by-entry.
- Function spaces: Sets of functions (say, all continuous functions on [0, 1]) that satisfy the axioms when you define addition and scaling pointwise.
Non-examples of vector spaces
Non-examples sharpen your understanding of why each axiom matters.
- The set of positive real numbers (under usual addition): The additive inverse of 2 would be −2, which isn't positive. No additive inverses means no vector space.
- The set of integers ℤ (as a space over ℝ): Scalar multiplication fails closure. Multiplying the integer 3 by the scalar 1/2 gives 3/2, which isn't an integer.
- A circle in ℝ²: Scaling a point on a unit circle by 2 moves it off the circle. No closure under scalar multiplication.
- A plane in ℝ³ that doesn't pass through the origin: It won't contain the zero vector, so it fails that axiom. (A plane through the origin, however, is a valid subspace.)
Vector operations
The two core operations in any vector space are vector addition and scalar multiplication. Everything else builds on these.
Vector addition
Vector addition combines two vectors component by component. In ℝ², for instance: (u₁, u₂) + (v₁, v₂) = (u₁ + v₁, u₂ + v₂).
Geometrically, you can visualize this with the parallelogram law: place the tails of two vectors at the same point, complete the parallelogram, and the diagonal is the sum.
The key properties carry over from the axioms:
- Commutative: u + v = v + u
- Associative: (u + v) + w = u + (v + w)
- Identity: v + 0 = v
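The componentwise rule above translates directly into code. Here's a minimal Python sketch (the helper name `vec_add` is ours, purely illustrative, not a library function):

```python
# Componentwise vector addition in R^n, representing vectors as tuples.
def vec_add(u, v):
    assert len(u) == len(v), "vectors must have the same dimension"
    return tuple(a + b for a, b in zip(u, v))

print(vec_add((2, -1, 5), (3, 4, -2)))  # (5, 3, 3)
```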
Scalar multiplication
Scalar multiplication scales a vector's magnitude and can reverse its direction. Multiplying each component by the scalar does the job: c(v₁, v₂, …, vₙ) = (cv₁, cv₂, …, cvₙ).
- A scalar greater than 1 stretches the vector; a scalar between 0 and 1 shrinks it.
- A negative scalar flips the vector's direction.
- The scalar 1 leaves the vector unchanged: 1v = v.
- Distributive properties connect the two operations: c(u + v) = cu + cv and (c + d)v = cv + dv.
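These scaling behaviors are easy to see numerically. A short Python sketch with an illustrative helper `vec_scale`:

```python
# Scalar multiplication in R^n: multiply every component by the scalar.
def vec_scale(c, v):
    return tuple(c * x for x in v)

v = (3, -4)
print(vec_scale(2, v))   # (6, -8): stretched
print(vec_scale(-1, v))  # (-3, 4): direction flipped
print(vec_scale(1, v))   # (3, -4): unchanged
```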
Linear combinations
A linear combination takes a set of vectors, scales each one, and adds the results:
w = c₁v₁ + c₂v₂ + ⋯ + cₖvₖ
The scalars c₁, …, cₖ are called coefficients and can be any elements of the field.
Linear combinations are the central building block for nearly everything that follows. Span, linear independence, bases, and solutions to linear systems are all defined in terms of linear combinations.
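A linear combination can be computed by scaling and accumulating, as in this short Python sketch (the function name `linear_combination` is our own):

```python
# A linear combination: scale each vector by its coefficient and sum.
def linear_combination(coeffs, vectors):
    assert len(coeffs) == len(vectors)
    n = len(vectors[0])
    result = [0] * n
    for c, v in zip(coeffs, vectors):
        for i in range(n):
            result[i] += c * v[i]
    return tuple(result)

# 2*(1, 0) + 3*(0, 1) = (2, 3)
print(linear_combination([2, 3], [(1, 0), (0, 1)]))  # (2, 3)
```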
Subspaces
A subspace is a subset of a vector space that is itself a vector space under the same addition and scalar multiplication. Think of it as a "smaller world" living inside a bigger one that still follows all the same rules.
Definition of subspaces
A subset W of a vector space V is a subspace if:
- W uses the same addition and scalar multiplication as V.
- W satisfies all the vector space axioms on its own.
Most of the axioms (associativity, commutativity, distributivity, etc.) are automatically inherited from the parent space. So you don't need to check all ten axioms from scratch. You just need the subspace test.
Criteria for subspaces (the subspace test)
To verify that W is a subspace, check three things:
- Non-empty: W contains the zero vector 0.
- Closed under addition: If u, v ∈ W, then u + v ∈ W.
- Closed under scalar multiplication: If v ∈ W and c is a scalar, then cv ∈ W.
You can combine steps 2 and 3 into a single check: verify closure under linear combinations. If u, v ∈ W and a, b are scalars, then au + bv ∈ W.
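The subspace test lends itself to numeric spot checks. Below is a Python sketch for subsets of ℝ² given as membership predicates; the names (`spot_check_subspace`, the sample lines) are illustrative, and passing a spot check on sample vectors is evidence rather than a proof:

```python
# Spot-check the subspace test on sample vectors from subsets of R^2.
def spot_check_subspace(contains, samples, scalars):
    if not contains((0.0, 0.0)):                               # zero vector
        return False
    for u in samples:
        for v in samples:
            if not contains((u[0] + v[0], u[1] + v[1])):       # closed under +
                return False
        for c in scalars:
            if not contains((c * u[0], c * u[1])):             # closed under scaling
                return False
    return True

line_through_origin = lambda p: p[1] == 2 * p[0]       # y = 2x: a subspace
shifted_line = lambda p: p[1] == 2 * p[0] + 1          # y = 2x + 1: misses 0

samples = [(1.0, 2.0), (-3.0, -6.0)]
print(spot_check_subspace(line_through_origin, samples, [2.0, -1.0]))  # True
print(spot_check_subspace(shifted_line, [(0.0, 1.0)], [2.0]))          # False
```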

Common subspaces
- Null space (kernel) of a matrix A: all vectors x such that Ax = 0. This is always a subspace.
- Column space (image) of a matrix A: the span of the columns of A. Tells you which outputs Ax are reachable.
- Row space: the span of the rows of A.
- Eigenspaces: for a given eigenvalue λ, the set of all vectors v satisfying Av = λv.
- Solution sets of homogeneous systems (Ax = 0) are subspaces. Non-homogeneous solution sets (Ax = b with b ≠ 0) are not subspaces because they don't contain 0.
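The null-space claim can be illustrated concretely: pick two vectors in the null space of a specific matrix and confirm their sum is still mapped to zero. A Python sketch (matrix and vectors chosen by hand for this example):

```python
# The null space of A is closed under addition: if A x = 0 and A y = 0,
# then A (x + y) = 0 as well.
def matvec(A, x):
    return tuple(sum(a * b for a, b in zip(row, x)) for row in A)

A = [[1, 2, 3],
     [2, 4, 6]]      # second row is twice the first, so rank 1

x = (3, 0, -1)       # A x = (3 - 3, 6 - 6) = (0, 0)
y = (-2, 1, 0)       # A y = (-2 + 2, -4 + 4) = (0, 0)
s = tuple(a + b for a, b in zip(x, y))

print(matvec(A, x), matvec(A, y), matvec(A, s))  # (0, 0) (0, 0) (0, 0)
```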
Span and linear independence
These two concepts work together to describe the "reach" and "efficiency" of a set of vectors.
Span of vectors
The span of a set of vectors is the collection of all linear combinations you can form from them:
span{v₁, …, vₖ} = {c₁v₁ + ⋯ + cₖvₖ : c₁, …, cₖ scalars}
The span is always a subspace (it's the smallest subspace containing those vectors). If the span equals the entire vector space V, you say the vectors span V, meaning every vector in V can be built from them.
Linear independence vs. dependence
A set of vectors {v₁, …, vₖ} is linearly independent if the only way to get the zero vector from a linear combination is the trivial way:
c₁v₁ + ⋯ + cₖvₖ = 0 implies c₁ = c₂ = ⋯ = cₖ = 0
If there's a non-trivial combination that gives 0, the set is linearly dependent. That means at least one vector in the set is redundant: it can be written as a linear combination of the others.
Geometric intuition:
- In ℝ²: two vectors are independent if they point in different (non-parallel) directions.
- In ℝ³: three vectors are independent if they don't all lie in the same plane.
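In ℝ², this intuition reduces to a determinant test: two vectors are independent exactly when the determinant of the 2 × 2 matrix with those vectors as columns is nonzero. A short Python sketch:

```python
# Two vectors in R^2 are linearly independent iff the determinant
# of the matrix [u | v] is nonzero (i.e. they are not parallel).
def independent_2d(u, v):
    det = u[0] * v[1] - u[1] * v[0]
    return det != 0

print(independent_2d((1, 2), (3, 4)))   # True: non-parallel directions
print(independent_2d((1, 2), (2, 4)))   # False: (2, 4) = 2 * (1, 2)
```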
Basis of a vector space
A basis is a set of vectors that is both linearly independent and spans the entire space. It's the "just right" set: no redundant vectors, but enough to reach everything.
Key facts about bases:
- Every vector in the space can be written as a unique linear combination of basis vectors.
- A basis is a minimal spanning set (remove any vector and it no longer spans).
- A basis is a maximal independent set (add any vector and it becomes dependent).
- The standard basis for ℝⁿ is {e₁, …, eₙ}, where eᵢ has a 1 in the i-th position and 0s elsewhere.
- Different bases give different coordinate representations of the same vectors, which is useful for simplifying problems.
Dimension of vector spaces
The dimension of a vector space is the number of vectors in any basis. This single number captures a lot about the space's structure.
Finite vs. infinite dimensions
- Finite-dimensional: ℝⁿ has dimension n. The polynomial space P₃ (polynomials of degree at most 3) has dimension 4, because a basis is {1, x, x², x³}.
- Infinite-dimensional: The space of all polynomials (no degree bound) is infinite-dimensional. So is the space of continuous functions on [0, 1]. No finite set of vectors can span these spaces.
Dimension theorem
Every basis of a given vector space has the same number of elements. This is why dimension is well-defined: it doesn't depend on which basis you choose.
Useful consequences:
- If dim(V) = n, then any set of more than n vectors in V must be linearly dependent.
- Any linearly independent set of exactly n vectors in V automatically spans V (and is therefore a basis).
- If W is a subspace of V, then dim(W) ≤ dim(V).
Coordinate systems
Once you fix a basis {b₁, …, bₙ} for a space, every vector v can be written uniquely as:
v = c₁b₁ + c₂b₂ + ⋯ + cₙbₙ
The scalars c₁, …, cₙ are the coordinates of v relative to that basis. The standard basis in ℝⁿ gives the familiar coordinates you're used to. Switching to a different basis can simplify a problem dramatically, which is why change of basis formulas matter.
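For a basis of ℝ², finding the coordinates means solving a 2 × 2 system, here via Cramer's rule in a short Python sketch (the function name `coordinates_2d` is illustrative):

```python
# Coordinates of v relative to a basis {b1, b2} of R^2: solve
# c1*b1 + c2*b2 = v using Cramer's rule for the 2x2 case.
def coordinates_2d(b1, b2, v):
    det = b1[0] * b2[1] - b1[1] * b2[0]
    assert det != 0, "b1, b2 do not form a basis"
    c1 = (v[0] * b2[1] - v[1] * b2[0]) / det
    c2 = (b1[0] * v[1] - b1[1] * v[0]) / det
    return (c1, c2)

# v = (5, 4) in the basis {(1, 1), (1, -1)}: 4.5*(1,1) + 0.5*(1,-1) = (5, 4)
print(coordinates_2d((1, 1), (1, -1), (5, 4)))  # (4.5, 0.5)
```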
Vector space transformations
A transformation between vector spaces is a function that sends vectors from one space to another. The most important kind preserves the linear structure.

Linear transformations
A function T : V → W is a linear transformation if it satisfies two conditions for all vectors u, v and all scalars c:
- Additivity: T(u + v) = T(u) + T(v)
- Homogeneity: T(cv) = cT(v)
These two conditions together mean T preserves linear combinations. In finite-dimensional spaces, every linear transformation can be represented by a matrix. Familiar geometric operations like rotations, reflections, projections, and scaling are all linear transformations.
Kernel and image
Every linear transformation has two important subspaces associated with it:
- Kernel (null space): ker(T) = {v ∈ V : T(v) = 0}. This is a subspace of V. If the kernel contains only 0, then T is injective (one-to-one).
- Image (range): im(T) = {T(v) : v ∈ V}. This is a subspace of W. If the image equals all of W, then T is surjective (onto).
The Rank-Nullity Theorem ties these together:
dim(ker T) + dim(im T) = dim(V)
This is one of the most useful results in linear algebra. It tells you that the "information lost" by (the kernel) plus the "information preserved" (the image) always adds up to the dimension of the domain.
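As a concrete check of the theorem, take a 2 × 3 matrix mapping ℝ³ to ℝ² whose columns span all of ℝ², so dim(im T) = 2 and the kernel must be one-dimensional. A Python sketch with a kernel vector worked out by hand for this specific matrix:

```python
# Rank-nullity for T(x) = A x with the matrix below: dim(im T) = 2
# (the first two columns are the standard basis of R^2), so the
# theorem forces dim(ker T) = 3 - 2 = 1.
def matvec(A, x):
    return tuple(sum(a * b for a, b in zip(row, x)) for row in A)

A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]

# (-1, -1, 1) spans the kernel: row 1 gives -1 + 1 = 0, row 2 gives -1 + 1 = 0.
print(matvec(A, (-1.0, -1.0, 1.0)))  # (0.0, 0.0)
```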
Isomorphisms between spaces
An isomorphism is a linear transformation that is both injective and surjective (bijective). Two vector spaces are isomorphic if there exists an isomorphism between them.
Isomorphic spaces are structurally identical from a linear algebra perspective. The key result: two finite-dimensional vector spaces over the same field are isomorphic if and only if they have the same dimension. So ℝ³, P₂ (polynomials of degree at most 2), and the space of 3 × 1 column matrices are all isomorphic because each has dimension 3.
Inner product spaces
An inner product space is a vector space equipped with an additional operation called an inner product. This operation lets you define geometric concepts like length, distance, and angle within the algebraic framework of vector spaces.
Definition of inner products
An inner product on a vector space V is a function ⟨·, ·⟩ that assigns a scalar ⟨u, v⟩ to each pair of vectors, satisfying:
- Positive definiteness: ⟨v, v⟩ ≥ 0, with equality only when v = 0.
- Conjugate symmetry: ⟨u, v⟩ equals the complex conjugate of ⟨v, u⟩. (For real spaces, this simplifies to plain symmetry: ⟨u, v⟩ = ⟨v, u⟩.)
- Linearity in the first argument: ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩.
The standard example is the dot product in ℝⁿ: ⟨u, v⟩ = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ.
From an inner product, you can define the norm (length) of a vector: ‖v‖ = √⟨v, v⟩.
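Both formulas are one-liners in code. A Python sketch of the dot product and the norm it induces:

```python
import math

# Standard dot product on R^n and the norm it induces.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

print(dot((1, 2), (3, -1)))  # 1
print(norm((3, 4)))          # 5.0
```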
Orthogonality and orthonormality
Two vectors u and v are orthogonal if ⟨u, v⟩ = 0. In geometric terms, they're perpendicular.
An orthonormal set goes one step further: the vectors are orthogonal and each has unit length (‖vᵢ‖ = 1).
Why care? Orthonormal bases make computations much cleaner. If {e₁, …, eₙ} is an orthonormal basis, then the coordinates of any vector v are simply cᵢ = ⟨v, eᵢ⟩. No systems of equations needed.
Gram-Schmidt process
The Gram-Schmidt process takes any linearly independent set and produces an orthonormal basis spanning the same subspace.
Steps:
- Start with a linearly independent set {v₁, …, vₖ}.
- Set u₁ = v₁.
- For each subsequent vector vᵢ, subtract off its projections onto all previously computed uⱼ's: uᵢ = vᵢ − Σⱼ (⟨vᵢ, uⱼ⟩ / ⟨uⱼ, uⱼ⟩) uⱼ, summing over j = 1, …, i − 1.
- Normalize each uᵢ to get a unit vector: eᵢ = uᵢ / ‖uᵢ‖.
The result is orthonormal and spans the same subspace as the original set. This process is used in least squares problems, QR factorization, and quantum mechanics.
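The steps above can be sketched directly in Python. This version works on tuples in ℝⁿ and normalizes each vector as it goes, which is equivalent to computing each uᵢ first and dividing by its length at the end:

```python
import math

# Gram-Schmidt: turn a linearly independent list of vectors in R^n
# into an orthonormal list spanning the same subspace.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        u = list(v)
        for e in basis:
            # e already has unit length, so the projection coefficient
            # <v, u_j>/<u_j, u_j> reduces to <v, e>.
            proj = sum(a * b for a, b in zip(v, e))
            u = [ui - proj * ei for ui, ei in zip(u, e)]
        length = math.sqrt(sum(x * x for x in u))
        assert length > 1e-12, "input vectors are linearly dependent"
        basis.append([x / length for x in u])
    return basis

e1, e2 = gram_schmidt([(3.0, 1.0), (2.0, 2.0)])
print(e1)  # unit vector in the direction of (3, 1)
# e1 and e2 are orthonormal: <e1, e2> = 0 and each has length 1.
```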
Applications of vector spaces
Vector spaces aren't just abstract theory. They provide the language and tools for solving concrete problems across many fields.
Linear algebra connections
- Systems of linear equations translate directly into questions about spans, null spaces, and column spaces.
- Eigenvalue problems arise in differential equations and dynamical systems, where you need to find vectors that a transformation only scales.
- Matrix decompositions (SVD, LU, QR) break matrices into simpler pieces for data compression, numerical stability, and analysis.
- Least squares approximation finds the "best fit" when an exact solution doesn't exist, using projections onto subspaces.
- Fourier analysis represents signals as linear combinations of sine and cosine functions, treating them as vectors in a function space.
Physics and engineering uses
- Quantum mechanics represents particle states as vectors in Hilbert spaces (infinite-dimensional inner product spaces).
- Electromagnetic theory models fields as vector-valued functions across space.
- Structural engineering uses finite element methods, which discretize continuous structures into systems of linear equations.
- Control theory describes dynamic systems using state vectors and linear transformations.
- Robotics relies on transformation matrices for motion planning and positioning.
Computer graphics applications
- 3D transformations (rotation, translation, scaling) manipulate objects and cameras using matrix multiplication.
- Texture mapping applies coordinate transformations to wrap images onto surfaces.
- Ray tracing computes light paths using vector operations to produce realistic images.
- Animation interpolates between positions and orientations in vector spaces to create smooth motion.
- Color spaces treat colors as vectors, and transformations convert between different color representations (RGB, HSV, etc.).