
Linear Algebra and Differential Equations

Key Concepts of Basis Vectors


Why This Matters

Basis vectors are the backbone of everything you'll do in linear algebra—from solving systems of equations to understanding matrix transformations and eigenvalue problems. When you grasp how basis vectors work, you're not just learning definitions; you're building the mental framework for span, linear independence, dimension, and coordinate representation. These concepts appear repeatedly throughout the course, and exam questions will test whether you understand the underlying structure, not just the formulas.

Here's the key insight: basis vectors give us a "coordinate language" for talking about any vector in a space. Every topic that follows—change of basis, orthogonalization, diagonalization—builds on this foundation. So don't just memorize that basis vectors must be linearly independent and span the space. Know why those two properties matter and how they connect to dimension, unique representation, and transformations.


Foundational Definitions and Properties

A basis is the minimal spanning set for a vector space—it has exactly enough vectors to reach everywhere, with no redundancy.

Definition of Basis Vectors

  • A basis is a set of vectors that spans the entire vector space—meaning any vector in the space can be written as a linear combination of basis vectors
  • Linear independence is required—no basis vector can be expressed as a combination of the others, ensuring no redundancy
  • Coordinates arise from the basis—once you fix a basis, every vector has a unique representation as coefficients of the basis vectors

Properties of Basis Vectors

  • The number of basis vectors equals the dimension—this is the fundamental link between bases and the "size" of a vector space
  • Unique representation guaranteed—every vector in the space corresponds to exactly one set of coefficients relative to the basis
  • All bases for a space have the same size—you can swap out which vectors you use, but the count stays fixed

Standard Basis Vectors

  • Standard basis vectors are unit vectors along coordinate axes—in $\mathbb{R}^n$, these are $\mathbf{e}_1 = (1,0,\ldots,0)$, $\mathbf{e}_2 = (0,1,\ldots,0)$, etc.
  • They provide the "default" coordinate system—when no basis is specified, assume the standard basis
  • Simplest for computation—coordinates of a vector relative to the standard basis are just its components

Compare: Standard basis vs. arbitrary basis—both span the same space and have the same number of vectors, but standard basis vectors are orthonormal and align with coordinate axes, making computation straightforward. On exams, if you're asked to "find coordinates," check which basis you're working in.
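The contrast above can be sketched in pure Python with hypothetical numbers: relative to the standard basis, coordinates are just a vector's components, while an arbitrary basis requires solving a small linear system (here via Cramer's rule for a $2 \times 2$ case).

```python
# Hypothetical example: coordinates of v = (5, 3) relative to two bases of R^2.
v = (5.0, 3.0)

# Standard basis: coordinates are just the components.
coords_standard = v

# Arbitrary basis B = {b1, b2}: solve c1*b1 + c2*b2 = v (2x2 Cramer's rule).
b1, b2 = (2.0, 1.0), (1.0, 1.0)
det = b1[0] * b2[1] - b2[0] * b1[1]          # nonzero because b1, b2 are independent
c1 = (v[0] * b2[1] - b2[0] * v[1]) / det
c2 = (b1[0] * v[1] - v[0] * b1[1]) / det

print(coords_standard)  # (5.0, 3.0)
print((c1, c2))         # (2.0, 1.0), since 2*(2,1) + 1*(1,1) = (5,3)
```

Note that the same vector $\mathbf{v}$ has different coordinates in the two bases—only the representation changes.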


Linear Independence and Span

These two properties are the "tests" a set of vectors must pass to qualify as a basis—span ensures coverage, independence ensures efficiency.

Linear Independence of Basis Vectors

  • A set is linearly independent if the only solution to $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$ is all coefficients equal to zero—this is the formal test
  • Dependent sets contain redundancy—at least one vector can be removed without shrinking the span
  • Independence guarantees unique coordinates—without it, the same vector could have multiple representations
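The formal test above amounts to a rank computation: a set of vectors is independent exactly when the rank of the matrix they form equals the number of vectors. A minimal sketch using hand-rolled Gaussian elimination (illustrative only; the helper names are not from the text):

```python
# Independence check via rank (pure-Python Gaussian elimination sketch).
def rank(rows, tol=1e-10):
    m = [list(r) for r in rows]
    r = 0
    for c in range(len(m[0])):
        # Find a pivot at or below row r in column c.
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > tol), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > tol:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent(vectors):
    # Independent iff no vector is redundant, i.e. rank == count.
    return rank(vectors) == len(vectors)

print(independent([(1, 0, 0), (0, 1, 0)]))   # True
print(independent([(1, 2, 3), (2, 4, 6)]))   # False: second = 2 * first
```

The dependent pair illustrates the "redundancy" bullet: removing the second vector leaves the span unchanged.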

Span of Basis Vectors

  • Span is the set of all linear combinations—written $\text{span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$, it's every vector you can "reach" using those vectors
  • A basis must span the entire space—if the span is smaller than the full space, you need more vectors
  • Span can be visualized geometrically—two non-parallel vectors in $\mathbb{R}^3$ span a plane, not the full space

Dimension of a Vector Space

  • Dimension equals the number of vectors in any basis—this is a theorem, not just a definition
  • Dimension tells you degrees of freedom—in $\mathbb{R}^3$, you need exactly 3 coordinates to specify a point
  • Subspaces have smaller dimension—a plane through the origin in $\mathbb{R}^3$ is a 2-dimensional subspace

Compare: Linear independence vs. span—independence prevents "too many" vectors (no redundancy), while span prevents "too few" (full coverage). A basis is the sweet spot where both conditions hold simultaneously. FRQs often ask you to verify one or both properties.


Changing and Constructing Bases

Different bases reveal different structure in the same space—choosing the right basis can dramatically simplify a problem.

Change of Basis

  • Change of basis expresses vectors using different "coordinates"—the vector itself doesn't change, only its representation
  • A change of basis matrix $P$ converts coordinates—if the columns of $P$ are the $B'$-basis vectors expressed in basis $B$, then $[\mathbf{v}]_{B'} = P^{-1}[\mathbf{v}]_B$
  • Strategic basis choice simplifies problems—diagonal matrices, for instance, are much easier to work with than dense ones
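The conversion formula above can be sketched in pure Python for a hypothetical $2 \times 2$ case (the matrix entries and helper names are illustrative, not from the text):

```python
# Change of basis sketch in R^2: columns of P are the new basis
# vectors written in the old basis.
P = [[2.0, 1.0],
     [1.0, 1.0]]

def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula.
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

v_old = [5.0, 3.0]                 # coordinates in the old basis
v_new = matvec(inv2(P), v_old)     # [v]_new = P^{-1} [v]_old
print(v_new)                       # [2.0, 1.0]
print(matvec(P, v_new))            # [5.0, 3.0] — converting back recovers v_old
```

Multiplying by $P$ and by $P^{-1}$ move coordinates in opposite directions, which is the round trip checked in the last line.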

Orthonormal Basis

  • Orthonormal means orthogonal (perpendicular) and normalized (unit length)—formally, $\mathbf{u}_i \cdot \mathbf{u}_j = 0$ for $i \neq j$ and $\|\mathbf{u}_i\| = 1$
  • Projections become simple dot products—the coordinate of $\mathbf{v}$ along $\mathbf{u}_i$ is just $\mathbf{v} \cdot \mathbf{u}_i$
  • The change of basis matrix is orthogonal—meaning $P^{-1} = P^T$, which makes computations fast and stable
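The "coordinates are dot products" property can be verified directly; a minimal sketch with a hypothetical orthonormal basis of $\mathbb{R}^2$:

```python
import math

# An orthonormal basis of R^2 (rotated 45 degrees from standard).
u1 = (1 / math.sqrt(2), 1 / math.sqrt(2))
u2 = (1 / math.sqrt(2), -1 / math.sqrt(2))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = (3.0, 1.0)
c1, c2 = dot(v, u1), dot(v, u2)      # coordinates: just dot products

# Reconstruct v from its coordinates: c1*u1 + c2*u2.
recon = tuple(c1 * a + c2 * b for a, b in zip(u1, u2))
print(recon)                          # ≈ (3.0, 1.0)
```

No linear system needs to be solved—this is exactly why orthonormal bases make projection and coordinate-finding cheap.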

Gram-Schmidt Process

  • Gram-Schmidt converts any basis to an orthonormal one—it systematically removes the "overlap" between vectors
  • The algorithm: orthogonalize, then normalize—subtract projections onto previous vectors, then scale to unit length
  • Preserves the span at each step—the new orthonormal basis spans exactly the same space as the original
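The two-step algorithm above ("orthogonalize, then normalize") fits in a few lines; this is a minimal pure-Python sketch, with a tolerance check to skip dependent inputs:

```python
import math

def gram_schmidt(vectors):
    # Orthogonalize each vector against the basis built so far, then normalize.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:                       # subtract projection onto each prior u
            c = dot(w, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(dot(w, w))
        if norm > 1e-12:                      # skip vectors dependent on the rest
            basis.append([wi / norm for wi in w])
    return basis

Q = gram_schmidt([(1.0, 1.0, 0.0), (1.0, 0.0, 1.0)])
print(Q)  # two orthonormal vectors spanning the same plane in R^3
```

Because each step only subtracts multiples of earlier vectors, every partial output spans the same subspace as the corresponding partial input—the span-preservation bullet above.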

Compare: Arbitrary basis vs. orthonormal basis—both span the same space, but orthonormal bases make projection, coordinate finding, and matrix inversion trivially easy. If an exam problem involves projections or least squares, think Gram-Schmidt.


Bases in Transformations and Applications

The power of basis vectors becomes clear when you see how they interact with linear transformations—the basis you choose determines how "nice" your matrix looks.

Basis Vectors in Matrix Transformations

  • A matrix's columns show where basis vectors land—if $A$ is a transformation matrix, column $j$ is $A\mathbf{e}_j$
  • Understanding the action on basis vectors reveals the full transformation—linearity means everything else follows
  • Different bases yield different matrix representations—the same transformation can look simple or complicated depending on your choice
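The "columns are images of basis vectors" fact, plus linearity, can be checked concretely; a small sketch with a hypothetical matrix:

```python
# Columns of A are the images of the standard basis vectors.
A = [[2.0, 1.0],
     [0.0, 3.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

e1, e2 = [1.0, 0.0], [0.0, 1.0]
print(matvec(A, e1))           # [2.0, 0.0] — first column of A
print(matvec(A, e2))           # [1.0, 3.0] — second column of A

# Linearity: A(3*e1 + 2*e2) = 3*A(e1) + 2*A(e2).
print(matvec(A, [3.0, 2.0]))   # [8.0, 6.0]
```

This is also the answer to self-check question 5: knowing $A\mathbf{e}_1, \ldots, A\mathbf{e}_n$ determines $A\mathbf{v}$ for every $\mathbf{v}$, because every $\mathbf{v}$ is a linear combination of the basis vectors.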

Eigenvectors as Basis Vectors

  • Eigenvectors satisfy $A\mathbf{v} = \lambda\mathbf{v}$—they only get scaled, not rotated, by the transformation
  • An eigenbasis diagonalizes the matrix—if you can form a basis of eigenvectors, the matrix becomes diagonal in that basis
  • Diagonalization simplifies powers and exponentials—once $A = PDP^{-1}$ with $D$ diagonal, computing $A^{100} = PD^{100}P^{-1}$ only requires raising eigenvalues to the 100th power
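As a sketch of the power trick, take the hypothetical matrix $A = \begin{pmatrix}2&1\\1&2\end{pmatrix}$, which has eigenvalues $3$ and $1$ with eigenvectors $(1,1)$ and $(1,-1)$; computing $A^{10}$ through the eigenbasis matches repeated multiplication:

```python
# Matrix power via an eigenbasis: A^10 = P D^10 P^{-1} (2x2 sketch).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 2.0]]
P = [[1.0, 1.0], [1.0, -1.0]]            # columns: eigenvectors (1,1), (1,-1)
P_inv = [[0.5, 0.5], [0.5, -0.5]]        # inverse of P, computed by hand
D10 = [[3.0 ** 10, 0.0], [0.0, 1.0]]     # D^10: raise each eigenvalue to 10

A10_diag = matmul(matmul(P, D10), P_inv)

# Sanity check against ten direct multiplications.
A10_direct = A
for _ in range(9):
    A10_direct = matmul(A10_direct, A)
print(A10_diag)    # [[29525.0, 29524.0], [29524.0, 29525.0]]
print(A10_direct)  # same result
```

The diagonal route does a fixed amount of work regardless of the exponent, while direct multiplication grows with it—this is why eigenbases are the standard tool for repeated application of a matrix.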

Basis Vectors in Coordinate Systems

  • The choice of basis defines your coordinate system—Cartesian coordinates come from the standard basis, while other systems (like polar) correspond to different, sometimes position-dependent, direction choices
  • Non-orthogonal bases are valid but messier—coordinates still exist, but formulas for length and angle become more complex
  • Physical applications often suggest natural bases—principal axes, normal modes, and other structures guide basis selection

Compare: Standard basis vs. eigenbasis—the standard basis is universal and simple, but an eigenbasis is tailored to a specific transformation, making that transformation diagonal. Exam tip: if a problem involves repeated application of a matrix, diagonalization via eigenvectors is usually the approach.


Quick Reference Table

Concept | Best Examples
Definition & uniqueness | Basis vectors, Properties of basis vectors
Span and coverage | Span of basis vectors, Standard basis vectors
Linear independence | Linear independence, Properties of basis vectors
Dimension | Dimension of a vector space, Standard basis
Orthogonality | Orthonormal basis, Gram-Schmidt process
Change of representation | Change of basis, Basis in coordinate systems
Transformations | Matrix transformations, Eigenvectors as basis
Diagonalization | Eigenvectors as basis, Change of basis

Self-Check Questions

  1. What two properties must a set of vectors satisfy to be a basis, and why is each property necessary?

  2. If you have 4 vectors in $\mathbb{R}^3$, can they form a basis? Explain using the concept of dimension.

  3. Compare and contrast the standard basis with an eigenbasis for a matrix $A$. When would you prefer each?

  4. Describe how the Gram-Schmidt process transforms a basis. What properties does the output have that the input might lack?

  5. If a matrix $A$ acts on a vector $\mathbf{v}$, how can you determine $A\mathbf{v}$ by only knowing what $A$ does to the basis vectors? Why does this work?