
Linear Algebra and Differential Equations

Key Concepts of Vector Spaces


Why This Matters

Vector spaces aren't just abstract mathematical structures—they're the foundation for nearly everything you'll encounter in linear algebra and beyond. When you study differential equations, computer graphics, quantum mechanics, or data science, you're working with vector spaces whether you realize it or not. The concepts here—spanning, independence, dimension, and transformations—give you the language to describe how mathematical objects behave and interact.

Here's what you're really being tested on: can you recognize why certain vectors form a basis, how transformations preserve structure, and what properties make a subspace valid? Don't just memorize definitions—know what concept each item illustrates and how they connect. If you understand that a basis is the "minimal spanning set" and dimension tells you the "degrees of freedom," you'll crush both computational problems and conceptual questions.


Foundational Structures

Every vector space problem starts with understanding what makes a space valid and how smaller spaces live inside larger ones. The key insight is that vector spaces are defined by their behavior under two operations: addition and scalar multiplication.

Definition of a Vector Space

  • Eight axioms define validity—closure under addition and scalar multiplication, plus associativity, commutativity, identity elements, and inverses
  • Scalars come from a field (usually $\mathbb{R}$ or $\mathbb{C}$), which determines what "multiplication by a scalar" means
  • Examples span from concrete to abstract: $\mathbb{R}^n$, polynomial spaces $P_n$, and continuous function spaces $C[a,b]$ all qualify

Subspaces

  • Three conditions to verify—contains the zero vector, closed under addition, closed under scalar multiplication (the "subspace test")
  • Always passes through the origin—a plane in $\mathbb{R}^3$ is only a subspace if it contains $\vec{0}$
  • Intersection of subspaces is always a subspace, but union generally is not—this is a common exam trap

Compare: Vector Space vs. Subspace—both satisfy the same axioms, but a subspace inherits its operations from the parent space. If an FRQ asks you to prove something is a subspace, use the three-condition test, not all eight axioms.
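
A quick numerical spot-check of the three-condition test can build intuition, though it never substitutes for a proof. The sketch below, assuming NumPy is available and using the plane $x + y + z = 0$ in $\mathbb{R}^3$ as a made-up candidate, checks the zero vector and samples closure under addition and scalar multiplication.

```python
import numpy as np

def in_candidate(v, tol=1e-10):
    """Membership test for the candidate subspace: the plane x + y + z = 0 in R^3."""
    return abs(v.sum()) < tol

rng = np.random.default_rng(0)

# Condition 1: contains the zero vector
print(in_candidate(np.zeros(3)))                       # True

# Conditions 2 and 3: spot-check closure under addition and scalar multiplication
# (samples only -- a real proof must cover every choice of vectors and scalars)
for _ in range(3):
    u = rng.standard_normal(3); u -= u.mean()          # force u onto the plane
    w = rng.standard_normal(3); w -= w.mean()          # force w onto the plane
    c = rng.standard_normal()
    print(in_candidate(u + w), in_candidate(c * u))    # True True
```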


Independence and Spanning

These twin concepts determine whether you have "enough" vectors and whether you have "too many." Linear independence means no redundancy; span means complete coverage.

Linear Independence and Dependence

  • Test via the equation $c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{0}$—if only the trivial solution exists, the set is independent
  • Geometric interpretation—independent vectors point in "genuinely different" directions; dependent vectors are redundant
  • In $\mathbb{R}^n$, you cannot have more than $n$ linearly independent vectors—this limits basis size (see the rank check after this list)
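
One concrete way to run the trivial-solution test, assuming NumPy and with vectors of my own choosing, is to stack the vectors as columns and compare the matrix rank to the number of vectors; for a square matrix, a nonzero determinant is the equivalent shortcut.

```python
import numpy as np

v1, v2, v3 = np.array([1., 0., 2.]), np.array([0., 1., 1.]), np.array([1., 1., 3.])
A = np.column_stack([v1, v2, v3])      # vectors as columns of a 3x3 matrix

# Independent iff rank equals the number of vectors, i.e. the only solution
# to c1*v1 + c2*v2 + c3*v3 = 0 is the trivial one
print(np.linalg.matrix_rank(A))        # 2 -> dependent (here v3 = v1 + v2)

# Square-matrix shortcut: determinant nonzero <=> columns independent
print(np.linalg.det(A))                # ~0.0, confirming dependence
```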

Span of Vectors

  • Span = all linear combinations: $\text{span}\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\} = \{c_1\vec{v}_1 + \cdots + c_k\vec{v}_k : c_i \in \mathbb{R}\}$
  • Adding vectors can only expand or maintain span—never shrinks it
  • Span of a single nonzero vector in $\mathbb{R}^3$ is a line; span of two independent vectors is a plane (a membership check follows this list)
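
To decide whether a vector lies in a span, one workable approach (a sketch with example vectors of my own choosing) is to append the vector as an extra column and see whether the rank grows: if it does not, the vector was already a linear combination of the others.

```python
import numpy as np

v1 = np.array([1., 0., 1.])
v2 = np.array([0., 1., 1.])
A = np.column_stack([v1, v2])          # span{v1, v2} is a plane in R^3

def in_span(A, b):
    """b lies in the column span of A iff appending b does not increase the rank."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(in_span(A, np.array([2., 3., 5.])))   # True:  2*v1 + 3*v2
print(in_span(A, np.array([0., 0., 1.])))   # False: not on the plane
```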

Basis and Dimension

  • Basis = linearly independent spanning set—the minimal set that generates the entire space
  • Dimension = number of basis vectors: $\dim(\mathbb{R}^n) = n$, $\dim(P_2) = 3$ (polynomials up to degree 2)
  • All bases of a space have the same size—this is why dimension is well-defined and so powerful

Compare: Span vs. Basis—span tells you what you can reach; basis tells you the most efficient way to reach everything. A spanning set might have redundant vectors, but a basis never does.
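
To see the span-vs-basis distinction concretely, here is a small sketch (assuming SymPy is available; the vectors are my own example) that trims a redundant spanning set down to a basis and reads off the dimension.

```python
import sympy as sp

# Three vectors that span a plane in R^3; the third is redundant (v3 = v1 + v2)
v1, v2, v3 = sp.Matrix([1, 0, 1]), sp.Matrix([0, 1, 1]), sp.Matrix([1, 1, 2])
A = sp.Matrix.hstack(v1, v2, v3)

basis = A.columnspace()    # a basis for the span, chosen from the original columns
print(len(basis))          # 2 -> the spanned subspace has dimension 2
print(basis)               # [Matrix([[1], [0], [1]]), Matrix([[0], [1], [1]])]
```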


Linear Transformations and Their Properties

Transformations are functions between vector spaces that "play nice" with the structure. The preservation of addition and scalar multiplication is what makes them linear.

Linear Transformations

  • Two properties define linearity: $T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$ and $T(c\vec{v}) = cT(\vec{v})$
  • Matrix representation—every linear transformation $T: \mathbb{R}^n \to \mathbb{R}^m$ corresponds to an $m \times n$ matrix (see the sketch after this list)
  • Composition of transformations equals matrix multiplication—order matters!
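
A minimal NumPy sketch of both ideas, using a rotation and a scaling of my own choosing: applying a linear transformation is matrix-vector multiplication, and composing transformations multiplies their matrices, where order matters.

```python
import numpy as np

# T rotates 90 degrees counterclockwise in R^2; S stretches the x-axis by 2
T = np.array([[0., -1.],
              [1.,  0.]])
S = np.array([[2., 0.],
              [0., 1.]])

v = np.array([1., 1.])
print(T @ v)        # applying T is matrix-vector multiplication: [-1.  1.]

# Composition "S after T" has matrix S @ T -- and swapping the order changes the result
print(S @ T @ v)    # rotate, then scale: [-2.  1.]
print(T @ S @ v)    # scale, then rotate: [-1.  2.]
```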

Null Space and Range

  • Null space (kernel) = $\{\vec{v} : T(\vec{v}) = \vec{0}\}$—measures what information the transformation "destroys"
  • Range (image) = $\{T(\vec{v}) : \vec{v} \in V\}$—measures what outputs are actually achievable
  • Rank-Nullity Theorem: $\dim(\text{null } T) + \dim(\text{range } T) = \dim(\text{domain})$, your most powerful dimension-counting tool

Compare: Null Space vs. Range—null space lives in the domain, range lives in the codomain. A transformation is injective (one-to-one) iff null space = $\{\vec{0}\}$; it's surjective (onto) iff range = entire codomain.
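
The sketch below (assuming NumPy and SciPy are available, with a $2 \times 3$ matrix of my own choosing) computes a basis for the null space and the rank, then checks the Rank-Nullity Theorem against the dimension of the domain.

```python
import numpy as np
from scipy.linalg import null_space

# T: R^3 -> R^2 represented by a 2x3 matrix
A = np.array([[1., 0., 1.],
              [0., 1., 1.]])

N = null_space(A)                        # orthonormal basis of the kernel (columns of N)
rank = np.linalg.matrix_rank(A)          # dimension of the range (column space)

print(N.shape[1])                        # nullity = 1: one direction gets "destroyed"
print(rank)                              # rank = 2: the range is all of R^2
print(N.shape[1] + rank == A.shape[1])   # Rank-Nullity: 1 + 2 == 3 -> True
```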


Eigentheory

Eigenvectors reveal the "natural directions" of a transformation—directions that get stretched or compressed but not rotated. This is where linear algebra meets dynamics and differential equations.

Eigenvalues and Eigenvectors

  • Defining equation: $A\vec{v} = \lambda\vec{v}$ where $\vec{v} \neq \vec{0}$; the eigenvector $\vec{v}$ only scales by factor $\lambda$ (verified numerically after this list)
  • Finding eigenvalues—solve $\det(A - \lambda I) = 0$ (the characteristic equation)
  • Applications everywhere—stability analysis, principal component analysis, solving $\vec{x}' = A\vec{x}$ in differential equations
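
A quick NumPy check of the defining equation with a symmetric $2 \times 2$ matrix of my own choosing: `np.linalg.eig` returns the eigenvalues and eigenvectors (as columns), and $A\vec{v}$ should match $\lambda\vec{v}$ for each pair.

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])                  # characteristic equation: (2 - l)^2 - 1 = 0 -> l = 1, 3

eigvals, eigvecs = np.linalg.eig(A)       # eigenvectors are the columns of eigvecs

for i, lam in enumerate(eigvals):
    v = eigvecs[:, i]
    # A v equals lambda * v: the eigenvector is only scaled, never rotated
    print(lam, np.allclose(A @ v, lam * v))   # prints True for both eigenvalues
```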

Compare: Null Space vs. Eigenspace—the null space of $A$ is the eigenspace for $\lambda = 0$. If $\lambda = 0$ is an eigenvalue, the matrix is singular (non-invertible).
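
To make this Compare concrete, the sketch below (NumPy, with a deliberately singular matrix of my own choosing) shows that $\lambda = 0$ appears among the eigenvalues, the determinant vanishes, and the corresponding eigenvector is sent to $\vec{0}$, i.e. it lies in the null space.

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])                   # second row = 2 * first row -> singular

eigvals, eigvecs = np.linalg.eig(A)
print(np.linalg.det(A))                    # ~0.0: not invertible
print(np.isclose(eigvals, 0).any())        # True: lambda = 0 is an eigenvalue

# The eigenvector paired with lambda = 0 satisfies A v = 0, so it spans the null space
i = int(np.argmin(np.abs(eigvals)))
print(np.allclose(A @ eigvecs[:, i], 0))   # True
```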


Inner Product Structures

Inner products add geometry to algebra—suddenly you can talk about lengths, angles, and perpendicularity. The inner product generalizes the dot product to abstract spaces.

Inner Product Spaces

  • Inner product axioms—linearity in first argument, symmetry (or conjugate symmetry for $\mathbb{C}$), positive definiteness
  • Induces a norm: $\|\vec{v}\| = \sqrt{\langle \vec{v}, \vec{v} \rangle}$ gives vector "length"
  • Standard example: $\mathbb{R}^n$ with dot product, $\langle \vec{u}, \vec{v} \rangle = \vec{u} \cdot \vec{v} = \sum u_i v_i$ (see the sketch after this list)
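
With vectors of my own choosing, a short NumPy check confirms that the dot product plays the role of the inner product in $\mathbb{R}^n$ and that the induced norm matches the usual notion of length.

```python
import numpy as np

u = np.array([1., 2., 2.])
v = np.array([2., 0., 1.])

print(np.dot(u, v))                        # <u, v> = 1*2 + 2*0 + 2*1 = 4.0
print(np.sqrt(np.dot(u, u)))               # induced norm ||u|| = sqrt(<u, u>) = 3.0
print(np.isclose(np.linalg.norm(u), np.sqrt(np.dot(u, u))))   # True: same "length"
```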

Orthogonality and Orthonormal Bases

  • Orthogonal means perpendicular: $\langle \vec{u}, \vec{v} \rangle = 0$; orthonormal adds $\|\vec{v}\| = 1$ for each vector
  • Gram-Schmidt process—algorithm to convert any basis into an orthonormal basis
  • Projection formula simplifies—with an orthonormal basis $\{e_i\}$, coordinates are just $c_i = \langle \vec{v}, e_i \rangle$

Compare: Orthogonal vs. Orthonormal—both involve perpendicularity, but orthonormal vectors are also unit length. Orthonormal bases make coefficient calculations trivial—no matrix inversion needed.
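
Here is a compact sketch of the classical Gram-Schmidt process (NumPy, with input vectors of my own choosing; in practice a QR factorization or the modified variant is more numerically stable), followed by the coordinate shortcut that orthonormal bases buy you: each coordinate is a single inner product, with no system to solve.

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn linearly independent vectors into an orthonormal list."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, e) * e for e in basis)   # remove components along earlier e_i
        basis.append(w / np.linalg.norm(w))            # rescale to unit length
    return basis

e1, e2 = gram_schmidt([np.array([1., 1., 0.]), np.array([1., 0., 1.])])
print(np.isclose(np.dot(e1, e2), 0.0))     # True: the new basis vectors are orthogonal

# Coordinates with respect to an orthonormal basis are just inner products c_i = <v, e_i>
v = 3 * e1 - 2 * e2
print(np.dot(v, e1), np.dot(v, e2))        # 3.0 -2.0
```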


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Subspace verification | Zero vector test, closure under addition/scalar multiplication |
| Linear independence | Trivial solution test, determinant $\neq 0$ for square systems |
| Basis construction | Standard basis $\{e_1, \ldots, e_n\}$, polynomial basis $\{1, x, x^2\}$ |
| Dimension counting | Rank-Nullity Theorem applications |
| Transformation properties | Kernel for injectivity, range for surjectivity |
| Eigenvalue computation | Characteristic equation $\det(A - \lambda I) = 0$ |
| Orthogonalization | Gram-Schmidt process |
| Projection | $\text{proj}_{\vec{u}}\vec{v} = \frac{\langle \vec{v}, \vec{u} \rangle}{\langle \vec{u}, \vec{u} \rangle}\vec{u}$ |

Self-Check Questions

  1. If a set of vectors spans $\mathbb{R}^4$ but contains 5 vectors, what can you conclude about their linear independence? Why?

  2. Compare and contrast the null space and range of a linear transformation. How does the Rank-Nullity Theorem connect them?

  3. A subspace of $\mathbb{R}^3$ has dimension 2. What geometric object does it represent, and what must be true about its relationship to the origin?

  4. Given a matrix with eigenvalue $\lambda = 0$, what can you immediately conclude about the matrix's invertibility and null space?

  5. Why does an orthonormal basis make finding the coordinates of a vector so much simpler than an arbitrary basis? What formula would you use?