Linear Algebra and Differential Equations

Key Concepts of Vector Spaces


Why This Matters

Vector spaces are the foundation for nearly everything you'll encounter in linear algebra and beyond. Differential equations, computer graphics, quantum mechanics, and data science all rely on them. The concepts here (spanning, independence, dimension, and transformations) give you the language to describe how mathematical objects behave and interact.

What you're really being tested on: can you recognize why certain vectors form a basis, how transformations preserve structure, and what properties make a subspace valid? Don't just memorize definitions. Know what each concept illustrates and how they connect. A basis is the "minimal spanning set," and dimension tells you the "degrees of freedom" in a space. Understanding those ideas will carry you through both computational problems and conceptual questions.


Foundational Structures

Every vector space problem starts with understanding what makes a space valid and how smaller spaces live inside larger ones. Vector spaces are defined entirely by their behavior under two operations: addition and scalar multiplication.

Definition of a Vector Space

A vector space is a set $V$ equipped with vector addition and scalar multiplication that satisfies eight axioms (ten if you count closure under the two operations separately): associativity and commutativity of addition, existence of an additive identity (the zero vector) and additive inverses, and compatibility of scalar multiplication with the field operations (distributive laws, multiplicative identity).

  • Scalars come from a field, usually $\mathbb{R}$ or $\mathbb{C}$. The field determines what "multiplication by a scalar" means.
  • Examples range from concrete to abstract. $\mathbb{R}^n$ (columns of $n$ real numbers), polynomial spaces $P_n$ (polynomials of degree at most $n$), and continuous function spaces $C[a,b]$ all satisfy these axioms.

The eight axioms might feel like a lot, but most of the time you're working in spaces where they're clearly satisfied. Where they really matter is when you need to show something isn't a vector space (for instance, a set where addition doesn't stay inside the set).

Subspaces

A subspace is a subset of a vector space that is itself a vector space under the same operations. Rather than checking all eight axioms, you only need three conditions (the subspace test):

  1. The subset contains the zero vector $\vec{0}$.
  2. It's closed under addition: if $\vec{u}$ and $\vec{v}$ are in the subset, so is $\vec{u} + \vec{v}$.
  3. It's closed under scalar multiplication: if $\vec{v}$ is in the subset and $c$ is a scalar, then $c\vec{v}$ is in the subset.

A subspace always passes through the origin. A plane in $\mathbb{R}^3$ is only a subspace if it contains $\vec{0}$. A plane that's been shifted away from the origin fails the test.

One common exam trap: the intersection of two subspaces is always a subspace, but the union generally is not. Think about two different lines through the origin in $\mathbb{R}^2$. Their union contains vectors from both lines, but adding a vector from one line to a vector from the other can land you outside both lines.
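This failure of closure can be checked numerically. A minimal sketch using NumPy, with the two coordinate axes of $\mathbb{R}^2$ standing in for the two lines (the choice of lines is just an illustration):

```python
import numpy as np

# Two distinct lines through the origin in R^2: the x-axis and the y-axis.
u = np.array([1.0, 0.0])   # lies on the x-axis
v = np.array([0.0, 1.0])   # lies on the y-axis

s = u + v                  # (1, 1): one vector from each line, added

# s lies on the x-axis only if its y-entry is 0, and on the y-axis only
# if its x-entry is 0 -- it lies on neither, so the union of the two
# lines is not closed under addition.
on_x_axis = s[1] == 0.0
on_y_axis = s[0] == 0.0
```

The intersection of the two lines, by contrast, is just $\{\vec{0}\}$, which passes the subspace test trivially.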

Compare: Vector Space vs. Subspace: both satisfy the same axioms, but a subspace inherits its operations from the parent space. If you're asked to prove something is a subspace, use the three-condition test, not all eight axioms.


Independence and Spanning

These twin concepts determine whether you have "enough" vectors and whether you have "too many." Linear independence means no redundancy; span means complete coverage.

Linear Independence and Dependence

A set of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}$ is linearly independent if the only solution to

$$c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{0}$$

is $c_1 = c_2 = \cdots = c_n = 0$ (the trivial solution). If any nontrivial solution exists, the set is linearly dependent, meaning at least one vector can be written as a combination of the others.

  • Geometric interpretation: independent vectors point in "genuinely different" directions. Dependent vectors include at least one that's redundant.
  • In $\mathbb{R}^n$, you can never have more than $n$ linearly independent vectors. This is a hard ceiling that limits basis size.

To test independence in practice, set up the equation above, form the corresponding matrix, and row reduce. If every column has a pivot, the set is independent.
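In NumPy, that pivot count is the matrix rank. A quick sketch with an example matrix (the specific vectors are just for illustration):

```python
import numpy as np

# Columns are the vectors being tested for independence.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# The set is independent iff A c = 0 has only the trivial solution,
# i.e. iff the rank equals the number of columns (every column a pivot).
independent = np.linalg.matrix_rank(A) == A.shape[1]
```

For a square matrix, checking $\det(A) \neq 0$ gives the same answer, but the rank test also works for non-square systems.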

Span of Vectors

The span of a set of vectors is the collection of all possible linear combinations of those vectors:

$$\text{span}\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\} = \{c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_k\vec{v}_k : c_i \in \mathbb{R}\}$$

  • Adding more vectors to a set can only expand or maintain the span. It never shrinks it.
  • The span of a single nonzero vector in $\mathbb{R}^3$ is a line through the origin. The span of two independent vectors is a plane through the origin.
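One practical way to test whether a given vector lies in a span is to solve for the coefficients and check the result. A NumPy sketch (the vectors here are example values):

```python
import numpy as np

# Does b lie in span{v1, v2}?  Stack the vectors as columns, solve the
# least-squares problem A c = b, and check whether A c reproduces b.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([v1, v2])

b = 3.0 * v1 - 2.0 * v2                  # built to lie in the span
c, *_ = np.linalg.lstsq(A, b, rcond=None)

in_span = np.allclose(A @ c, b)          # residual zero => b is in the span
```

If $b$ were outside the span, `A @ c` would be the closest point in the span rather than $b$ itself, and `in_span` would be `False`.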

Basis and Dimension

A basis for a vector space $V$ is a set of vectors that is both linearly independent and spans $V$. It's the minimal spanning set: remove any vector and you lose coverage; add any vector and you introduce redundancy.

  • Dimension equals the number of vectors in any basis. For example, $\dim(\mathbb{R}^n) = n$, and $\dim(P_2) = 3$ because $\{1, x, x^2\}$ is a basis for polynomials of degree at most 2.
  • All bases of a given space have the same number of vectors. This is a theorem, not obvious, and it's the reason dimension is well-defined.

Compare: Span vs. Basis: span tells you what you can reach; a basis tells you the most efficient way to reach everything. A spanning set might have redundant vectors, but a basis never does.


Linear Transformations and Their Properties

Transformations are functions between vector spaces that preserve the vector space structure. The preservation of addition and scalar multiplication is exactly what makes them "linear."

Linear Transformations

A function $T: V \to W$ is a linear transformation if it satisfies two properties for all vectors $\vec{u}, \vec{v} \in V$ and all scalars $c$:

  1. $T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$
  2. $T(c\vec{v}) = cT(\vec{v})$

These two conditions can be combined into one: $T(c_1\vec{u} + c_2\vec{v}) = c_1T(\vec{u}) + c_2T(\vec{v})$.

  • Matrix representation: every linear transformation $T: \mathbb{R}^n \to \mathbb{R}^m$ corresponds to an $m \times n$ matrix $A$, so that $T(\vec{v}) = A\vec{v}$.
  • Composition of transformations corresponds to matrix multiplication. Order matters: $T_2 \circ T_1$ means apply $T_1$ first, which corresponds to the product $A_2 A_1$.
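The order-matters point is easy to see numerically. A NumPy sketch with two example transformations (a rotation and a scaling, chosen for illustration):

```python
import numpy as np

# T1 rotates 90 degrees counterclockwise; T2 doubles the x-coordinate.
A1 = np.array([[0.0, -1.0],
               [1.0,  0.0]])
A2 = np.array([[2.0, 0.0],
               [0.0, 1.0]])

v = np.array([1.0, 0.0])

# (T2 o T1)(v): apply T1 first, then T2 -- the matrix is A2 @ A1.
w = (A2 @ A1) @ v          # rotate to (0, 1), then scale: (0, 1)
w_swapped = (A1 @ A2) @ v  # scale to (2, 0), then rotate: (0, 2)
```

The two results differ, confirming that matrix multiplication (and hence composition) is not commutative in general.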

Null Space and Range

  • The null space (or kernel) of $T$ is $\{\vec{v} \in V : T(\vec{v}) = \vec{0}\}$. It captures what the transformation "destroys": the inputs that get mapped to zero.
  • The range (or image) of $T$ is $\{T(\vec{v}) : \vec{v} \in V\}$. It captures which outputs are actually achievable.

Both the null space and the range are subspaces (of the domain and codomain, respectively).

The Rank-Nullity Theorem ties them together:

$$\dim(\text{null } T) + \dim(\text{range } T) = \dim(\text{domain})$$

This is your most powerful dimension-counting tool. If you know any two of these three quantities, you can find the third.
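In matrix terms, $\dim(\text{range }T)$ is the rank, so the nullity falls out by subtraction. A NumPy sketch with a deliberately rank-deficient example matrix:

```python
import numpy as np

# An example 2x3 matrix, viewed as T: R^3 -> R^2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # second row = 2 * first row

rank = np.linalg.matrix_rank(A)   # dim(range T)
nullity = A.shape[1] - rank       # dim(null T), by rank-nullity
```

Here the rank is 1 (the rows are proportional), so the null space must be 2-dimensional: a plane of inputs gets collapsed to zero.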

Compare: Null Space vs. Range: the null space lives in the domain; the range lives in the codomain. A transformation is injective (one-to-one) if and only if its null space is $\{\vec{0}\}$. It's surjective (onto) if and only if its range equals the entire codomain.


Eigentheory

Eigenvectors reveal the "natural directions" of a transformation: directions that get stretched or compressed but not rotated. This is where linear algebra connects directly to dynamics and differential equations.

Eigenvalues and Eigenvectors

The defining equation is:

$$A\vec{v} = \lambda\vec{v}, \quad \vec{v} \neq \vec{0}$$

The nonzero vector $\vec{v}$ is an eigenvector, and the scalar $\lambda$ is the corresponding eigenvalue. Under the transformation $A$, the eigenvector only gets scaled by $\lambda$.

Finding eigenvalues step by step:

  1. Rewrite $A\vec{v} = \lambda\vec{v}$ as $(A - \lambda I)\vec{v} = \vec{0}$.
  2. For a nonzero solution $\vec{v}$ to exist, the matrix $A - \lambda I$ must be singular.
  3. Solve the characteristic equation $\det(A - \lambda I) = 0$. The solutions are the eigenvalues.
  4. For each eigenvalue $\lambda$, find the eigenvectors by solving $(A - \lambda I)\vec{v} = \vec{0}$ (i.e., find the null space of $A - \lambda I$).
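The steps above can be checked against NumPy's built-in solver; a sketch with an example symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# By hand: det(A - lambda*I) = (2 - lambda)^2 - 1 = 0  =>  lambda = 1 or 3.
eigvals, eigvecs = np.linalg.eig(A)

# Each column of eigvecs is an eigenvector: verify A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

Working one example by hand and confirming it with `np.linalg.eig` is a good habit: the hand computation tests your understanding, the solver tests your arithmetic.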

The set of all eigenvectors for a given $\lambda$, together with the zero vector, forms the eigenspace for that eigenvalue.

Applications are everywhere: stability analysis of systems, principal component analysis in statistics, and solving systems of differential equations of the form $\vec{x}' = A\vec{x}$.

Compare: Null Space vs. Eigenspace: the null space of $A$ is exactly the eigenspace for $\lambda = 0$. If $\lambda = 0$ is an eigenvalue, the matrix is singular (non-invertible), because $\det(A) = 0$.


Inner Product Structures

Inner products add geometry to algebra. With an inner product, you can talk about lengths, angles, and perpendicularity. The inner product generalizes the familiar dot product to abstract vector spaces.

Inner Product Spaces

An inner product on a vector space $V$ is a function $\langle \cdot, \cdot \rangle$ that takes two vectors and returns a scalar, satisfying:

  • Linearity in the first argument: $\langle a\vec{u} + b\vec{v}, \vec{w} \rangle = a\langle \vec{u}, \vec{w} \rangle + b\langle \vec{v}, \vec{w} \rangle$
  • Symmetry (or conjugate symmetry over $\mathbb{C}$): $\langle \vec{u}, \vec{v} \rangle = \langle \vec{v}, \vec{u} \rangle$
  • Positive definiteness: $\langle \vec{v}, \vec{v} \rangle > 0$ for all $\vec{v} \neq \vec{0}$, with $\langle \vec{v}, \vec{v} \rangle = 0$ exactly when $\vec{v} = \vec{0}$

The inner product induces a norm (a notion of length): $\|\vec{v}\| = \sqrt{\langle \vec{v}, \vec{v} \rangle}$.

The standard example is $\mathbb{R}^n$ with the dot product: $\langle \vec{u}, \vec{v} \rangle = \vec{u} \cdot \vec{v} = \sum_{i=1}^{n} u_i v_i$.
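A tiny NumPy check of both formulas, with example vectors:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, 2.0])

inner = np.dot(u, v)              # <u, v> = 3*1 + 4*2 = 11
norm_u = np.sqrt(np.dot(u, u))    # induced norm: sqrt(<u, u>) = sqrt(25) = 5
```

The 3-4-5 triangle makes the induced norm easy to verify by eye.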

Orthogonality and Orthonormal Bases

Two vectors are orthogonal if $\langle \vec{u}, \vec{v} \rangle = 0$. An orthonormal set goes one step further: every vector is also a unit vector ($\|\vec{v}\| = 1$).

The Gram-Schmidt process converts any basis into an orthonormal basis. Here's the idea for two vectors $\vec{v}_1, \vec{v}_2$:

  1. Set $\vec{u}_1 = \vec{v}_1$.
  2. Subtract the component of $\vec{v}_2$ along $\vec{u}_1$: $\vec{u}_2 = \vec{v}_2 - \frac{\langle \vec{v}_2, \vec{u}_1 \rangle}{\langle \vec{u}_1, \vec{u}_1 \rangle}\vec{u}_1$.
  3. Normalize each $\vec{u}_i$ by dividing by its norm: $\vec{e}_i = \frac{\vec{u}_i}{\|\vec{u}_i\|}$.

For more vectors, repeat step 2 by subtracting projections onto all previously computed vectors.
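The procedure above translates directly into code. A minimal NumPy sketch (this projects against the already-normalized running result, the "modified" variant, which is numerically more stable but mathematically equivalent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for e in basis:
            u = u - np.dot(u, e) * e   # subtract the component along e
        basis.append(u / np.linalg.norm(u))
    return basis

e1, e2 = gram_schmidt([np.array([1.0, 1.0]),
                       np.array([1.0, 0.0])])
```

Since the `e` vectors are already unit length, the denominator $\langle \vec{u}, \vec{u} \rangle$ from step 2 is 1 and can be dropped.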

With an orthonormal basis $\{\vec{e}_1, \ldots, \vec{e}_n\}$, finding coordinates becomes trivial. The coefficient for each basis vector is just:

$$c_i = \langle \vec{v}, \vec{e}_i \rangle$$

No system of equations to solve, no matrix inversion needed.
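A NumPy sketch of this shortcut, using an example orthonormal basis (the standard basis of $\mathbb{R}^2$ rotated 45 degrees):

```python
import numpy as np

# An orthonormal basis of R^2.
e1 = np.array([1.0,  1.0]) / np.sqrt(2.0)
e2 = np.array([1.0, -1.0]) / np.sqrt(2.0)

v = np.array([4.0, -3.0])

# Coordinates are plain inner products -- no linear system to solve.
c1, c2 = np.dot(v, e1), np.dot(v, e2)
reconstructed = c1 * e1 + c2 * e2
```

With a non-orthonormal basis, finding `c1` and `c2` would instead require solving a linear system (or inverting the basis matrix).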

Compare: Orthogonal vs. Orthonormal: both involve perpendicularity, but orthonormal vectors are also unit length. Orthonormal bases make coefficient calculations trivial.


Quick Reference Table

  • Subspace verification: zero vector test, closure under addition/scalar multiplication
  • Linear independence: trivial solution test, determinant $\neq 0$ for square systems
  • Basis construction: standard basis $\{\vec{e}_1, \ldots, \vec{e}_n\}$, polynomial basis $\{1, x, x^2\}$
  • Dimension counting: Rank-Nullity Theorem applications
  • Transformation properties: kernel for injectivity, range for surjectivity
  • Eigenvalue computation: characteristic equation $\det(A - \lambda I) = 0$
  • Orthogonalization: Gram-Schmidt process
  • Projection: $\text{proj}_{\vec{u}}\vec{v} = \frac{\langle \vec{v}, \vec{u} \rangle}{\langle \vec{u}, \vec{u} \rangle}\vec{u}$

Self-Check Questions

  1. If a set of vectors spans $\mathbb{R}^4$ but contains 5 vectors, what can you conclude about their linear independence? Why?

  2. Compare and contrast the null space and range of a linear transformation. How does the Rank-Nullity Theorem connect them?

  3. A subspace of $\mathbb{R}^3$ has dimension 2. What geometric object does it represent, and what must be true about its relationship to the origin?

  4. Given a matrix with eigenvalue $\lambda = 0$, what can you immediately conclude about the matrix's invertibility and null space?

  5. Why does an orthonormal basis make finding the coordinates of a vector so much simpler than an arbitrary basis? What formula would you use?