Linear Algebra and Differential Equations

Key Concepts of Basis Vectors


Why This Matters

Basis vectors are the backbone of everything you'll do in linear algebra, from solving systems of equations to understanding matrix transformations and eigenvalue problems. When you grasp how basis vectors work, you're building the mental framework for span, linear independence, dimension, and coordinate representation. These concepts appear repeatedly throughout the course, and exam questions will test whether you understand the underlying structure, not just the formulas.

Basis vectors give you a "coordinate language" for talking about any vector in a space. Every topic that follows, including change of basis, orthogonalization, and diagonalization, builds on this foundation. So don't just memorize that basis vectors must be linearly independent and span the space. Know why those two properties matter and how they connect to dimension, unique representation, and transformations.


Foundational Definitions and Properties

A basis is the minimal spanning set for a vector space. It has exactly enough vectors to reach everywhere, with no redundancy.

Definition of Basis Vectors

A basis is a set of vectors that does two things at once: it spans the entire vector space (any vector in the space can be written as a linear combination of the basis vectors) and it's linearly independent (no basis vector can be expressed as a combination of the others).

Why do both conditions matter? Spanning ensures you can actually represent every vector. Independence ensures that representation is unique. Once you fix a basis, every vector corresponds to exactly one set of coefficients. Those coefficients are the vector's coordinates relative to that basis.
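Finding those coordinates is just solving a linear system. A minimal NumPy sketch (the basis and vector here are invented for illustration):

```python
import numpy as np

# A hypothetical basis for R^2, stored as the columns of B.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])

# Solving B c = v yields the unique coordinate vector of v in this basis.
c = np.linalg.solve(B, v)   # c = [1., 2.], since v = 1*(1,0) + 2*(1,1)
assert np.allclose(B @ c, v)
```

Because the columns of `B` are independent, `solve` finds exactly one coordinate vector; a dependent set would make the system singular.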

Properties of Basis Vectors

  • The number of basis vectors equals the dimension of the space. This is the fundamental link between bases and the "size" of a vector space.
  • Unique representation is guaranteed. Every vector in the space corresponds to exactly one set of coefficients relative to the basis. If you could write a vector two different ways, the basis vectors wouldn't be independent.
  • All bases for a given space have the same size. You can swap out which vectors you use, but the count stays fixed. This is why dimension is a well-defined property of the space itself, not of any particular basis.

Standard Basis Vectors

The standard basis vectors are unit vectors along the coordinate axes. In $\mathbb{R}^n$, these are $\mathbf{e}_1 = (1,0,\ldots,0)$, $\mathbf{e}_2 = (0,1,\ldots,0)$, and so on through $\mathbf{e}_n$.

They provide the "default" coordinate system. When no basis is specified, assume the standard basis. They're also the simplest for computation: the coordinates of a vector relative to the standard basis are just its components. The vector $(3, -2, 5)$ in $\mathbb{R}^3$ means $3\mathbf{e}_1 - 2\mathbf{e}_2 + 5\mathbf{e}_3$.

Compare: Standard basis vs. arbitrary basis: both span the same space and have the same number of vectors, but standard basis vectors are orthonormal and align with coordinate axes, making computation straightforward. On exams, if you're asked to "find coordinates," check which basis you're working in.


Linear Independence and Span

These two properties are the "tests" a set of vectors must pass to qualify as a basis. Span ensures coverage; independence ensures efficiency.

Linear Independence of Basis Vectors

A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is linearly independent if the only solution to

$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$

is $c_1 = c_2 = \cdots = c_n = 0$. This is the formal test. In practice, you check this by row-reducing the matrix whose columns are the vectors and seeing whether every column has a pivot.
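The pivot test can be run numerically: every column has a pivot exactly when the matrix has full column rank. A sketch using NumPy's `matrix_rank` (the vectors are made up, with a deliberate dependency):

```python
import numpy as np

# Candidate vectors as columns; the third is the sum of the first two.
V = np.column_stack([[1.0, 0.0, 2.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 1.0, 3.0]])

rank = np.linalg.matrix_rank(V)
independent = rank == V.shape[1]   # full column rank <=> linearly independent
print(rank, independent)           # 2 False: the set is dependent
```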

If a set is dependent, at least one vector can be removed without shrinking the span. That redundancy is exactly what a basis avoids: independence guarantees that coordinates are unique.

Span of Basis Vectors

The span of a set of vectors is the collection of all their linear combinations, written $\text{span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$. It's every vector you can "reach" using scalar multiples and addition of those vectors.

A basis must span the entire space. If the span is smaller than the full space, you need more vectors. You can visualize this geometrically: two non-parallel vectors in $\mathbb{R}^3$ span a plane through the origin, not the full 3D space. You'd need a third vector (not in that plane) to complete a basis for $\mathbb{R}^3$.
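This geometric picture can also be checked with rank; a short sketch with vectors chosen for illustration:

```python
import numpy as np

# Two non-parallel vectors in R^3 span only a plane, not the whole space.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
r2 = np.linalg.matrix_rank(np.column_stack([v1, v2]))      # 2: a plane

# A third vector outside that plane completes a basis for R^3.
v3 = np.array([0.0, 0.0, 1.0])
r3 = np.linalg.matrix_rank(np.column_stack([v1, v2, v3]))  # 3: all of R^3
```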

Dimension of a Vector Space

Dimension equals the number of vectors in any basis. This is a theorem, not just a definition. It tells you the degrees of freedom in the space: in $\mathbb{R}^3$, you need exactly 3 coordinates to specify a point, so $\dim(\mathbb{R}^3) = 3$.

Subspaces have smaller (or equal) dimension. A plane through the origin in $\mathbb{R}^3$ is a 2-dimensional subspace. A line through the origin is 1-dimensional. The zero vector alone forms a 0-dimensional subspace.

Compare: Linear independence vs. span: independence prevents "too many" vectors (no redundancy), while span prevents "too few" (full coverage). A basis is the sweet spot where both conditions hold simultaneously. Free-response questions often ask you to verify one or both properties.


Changing and Constructing Bases

Different bases reveal different structure in the same space. Choosing the right basis can dramatically simplify a problem.

Change of Basis

A change of basis expresses vectors using different coordinates. The vector itself doesn't change; only its representation does. Think of it like converting between Celsius and Fahrenheit: the temperature is the same, but the number you write down depends on the scale.

If $[\mathbf{v}]_B$ denotes the coordinates of $\mathbf{v}$ in basis $B$, and you want coordinates in basis $B'$, you use a change of basis matrix $P$:

$[\mathbf{v}]_{B'} = P^{-1}[\mathbf{v}]_B$

The columns of $P$ are the vectors of $B'$ written in terms of $B$, so $P$ converts $B'$-coordinates into $B$-coordinates and $P^{-1}$ goes the other way. Strategic basis choice simplifies problems: for instance, a matrix that looks dense in the standard basis might become diagonal in an eigenbasis.
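A concrete sketch with invented numbers, using the convention that the columns of $P$ hold the new basis vectors written in old coordinates, which matches $[\mathbf{v}]_{B'} = P^{-1}[\mathbf{v}]_B$:

```python
import numpy as np

# Columns of P: the new basis B' expressed in the old basis B (illustrative).
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v_B = np.array([3.0, 2.0])          # coordinates of v in the old basis B

# [v]_{B'} = P^{-1} [v]_B; solve a system instead of forming the inverse.
v_Bprime = np.linalg.solve(P, v_B)  # [1., 2.]
assert np.allclose(P @ v_Bprime, v_B)   # same vector, new coordinates
```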

Orthonormal Basis

An orthonormal basis has two special properties: the vectors are orthogonal (perpendicular to each other) and normalized (each has unit length). Formally, $\mathbf{u}_i \cdot \mathbf{u}_j = 0$ for $i \neq j$ and $\|\mathbf{u}_i\| = 1$.

Why care? Projections become simple dot products. The coordinate of $\mathbf{v}$ along $\mathbf{u}_i$ is just $\mathbf{v} \cdot \mathbf{u}_i$. No matrix inversion needed. Also, the change of basis matrix for an orthonormal basis is orthogonal, meaning $P^{-1} = P^T$. That makes computations fast and numerically stable.
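A quick sketch of coordinates-by-dot-product, using a hand-built orthonormal basis of $\mathbb{R}^2$ (a 45-degree rotation of the standard basis):

```python
import numpy as np

u1 = np.array([1.0, 1.0]) / np.sqrt(2)
u2 = np.array([-1.0, 1.0]) / np.sqrt(2)
v = np.array([2.0, 4.0])

c1, c2 = v @ u1, v @ u2     # coordinates are plain dot products, no inversion
assert np.allclose(c1 * u1 + c2 * u2, v)   # the dot products reconstruct v
```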

Gram-Schmidt Process

The Gram-Schmidt process converts any basis into an orthonormal one. Here's how it works:

  1. Start with your first vector. Normalize it to get $\mathbf{u}_1 = \frac{\mathbf{v}_1}{\|\mathbf{v}_1\|}$.
  2. Take the next vector and subtract its projection onto all previous vectors. For the second vector: $\mathbf{w}_2 = \mathbf{v}_2 - (\mathbf{v}_2 \cdot \mathbf{u}_1)\mathbf{u}_1$. This removes the component of $\mathbf{v}_2$ that "overlaps" with $\mathbf{u}_1$.
  3. Normalize the result: $\mathbf{u}_2 = \frac{\mathbf{w}_2}{\|\mathbf{w}_2\|}$.
  4. Repeat for each remaining vector, subtracting projections onto all previously computed orthonormal vectors before normalizing.

At each step, the span is preserved. The new orthonormal basis spans exactly the same space as the original.
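The steps above can be sketched directly in code. This is the modified variant, which subtracts each projection from the running remainder; it's equivalent in exact arithmetic and more numerically stable. The input vectors are made up:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (steps 1-4 above)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w -= (w @ u) * u                 # subtract projection onto each earlier u
        basis.append(w / np.linalg.norm(w))  # normalize the remainder
    return basis

u1, u2 = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
assert np.isclose(u1 @ u2, 0.0)             # orthogonal
assert np.isclose(np.linalg.norm(u2), 1.0)  # unit length
```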

Compare: Arbitrary basis vs. orthonormal basis: both span the same space, but orthonormal bases make projection, coordinate finding, and matrix inversion trivially easy. If an exam problem involves projections or least squares, think Gram-Schmidt.


Bases in Transformations and Applications

The power of basis vectors becomes clear when you see how they interact with linear transformations. The basis you choose determines how "nice" your matrix looks.

Basis Vectors in Matrix Transformations

A matrix's columns show where the basis vectors land under the transformation. If $A$ is a transformation matrix (relative to the standard basis), then column $j$ is $A\mathbf{e}_j$, the image of the $j$-th standard basis vector.

This is powerful because of linearity. Once you know what $A$ does to each basis vector, you know what it does to every vector. Any vector $\mathbf{v} = c_1\mathbf{e}_1 + \cdots + c_n\mathbf{e}_n$ maps to $A\mathbf{v} = c_1 A\mathbf{e}_1 + \cdots + c_n A\mathbf{e}_n$. Different bases yield different matrix representations of the same transformation.
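Both facts are easy to verify numerically; a sketch with an arbitrary $2 \times 2$ example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])    # an arbitrary example matrix
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Column j of A is the image of the j-th standard basis vector.
assert np.allclose(A @ e1, A[:, 0])
assert np.allclose(A @ e2, A[:, 1])

# Linearity: A's action on the basis determines its action on any vector.
v = 3.0 * e1 - 2.0 * e2
assert np.allclose(A @ v, 3.0 * (A @ e1) - 2.0 * (A @ e2))
```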

Eigenvectors as Basis Vectors

Eigenvectors satisfy $A\mathbf{v} = \lambda\mathbf{v}$: the transformation only scales them by the eigenvalue $\lambda$, without changing the line they lie on (a negative $\lambda$ flips the direction but keeps the same line).

If you can find $n$ linearly independent eigenvectors for an $n \times n$ matrix, they form an eigenbasis. In this basis, the matrix becomes diagonal, with eigenvalues on the diagonal. That's diagonalization: $A = PDP^{-1}$, where $P$ is the matrix of eigenvectors and $D$ is the diagonal matrix of eigenvalues.

Why does this matter? Diagonal matrices make powers and exponentials trivial. Computing $A^{100}$ reduces to $PD^{100}P^{-1}$, and raising a diagonal matrix to a power just means raising each diagonal entry to that power. This is also central to solving systems of differential equations, where $e^{At}$ appears.
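A small sketch of powering a matrix through its eigenbasis (the matrix is an arbitrary symmetric example, so a real eigenbasis is guaranteed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # symmetric => diagonalizable; eigenvalues 1 and 3
eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors

# A = P D P^{-1}, so A^10 = P D^10 P^{-1}; powering D powers each diagonal entry.
A10 = P @ np.diag(eigvals ** 10) @ np.linalg.inv(P)
assert np.allclose(A10, np.linalg.matrix_power(A, 10))
```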

Basis Vectors in Coordinate Systems

The choice of basis defines your coordinate system. Cartesian coordinates correspond to the standard basis. Other coordinate systems (polar, cylindrical, etc.) correspond to different basis choices, sometimes ones that vary from point to point.

Non-orthogonal bases are perfectly valid but messier. Coordinates still exist and are still unique, but formulas for length and angle become more complex because you can't just use the dot product in the usual way. Physical applications often suggest natural bases: principal axes of inertia, normal modes of vibration, and similar structures all point toward a basis that simplifies the problem.

Compare: Standard basis vs. eigenbasis: the standard basis is universal and simple, but an eigenbasis is tailored to a specific transformation, making that transformation diagonal. If a problem involves repeated application of a matrix (powers, exponentials), diagonalization via eigenvectors is usually the approach.


Quick Reference Table

| Concept | Key Topics |
| --- | --- |
| Definition & uniqueness | Basis vectors, Properties of basis vectors |
| Span and coverage | Span of basis vectors, Standard basis vectors |
| Linear independence | Linear independence, Properties of basis vectors |
| Dimension | Dimension of a vector space, Standard basis |
| Orthogonality | Orthonormal basis, Gram-Schmidt process |
| Change of representation | Change of basis, Basis in coordinate systems |
| Transformations | Matrix transformations, Eigenvectors as basis |
| Diagonalization | Eigenvectors as basis, Change of basis |

Self-Check Questions

  1. What two properties must a set of vectors satisfy to be a basis, and why is each property necessary?

  2. If you have 4 vectors in $\mathbb{R}^3$, can they form a basis? Explain using the concept of dimension.

  3. Compare and contrast the standard basis with an eigenbasis for a matrix $A$. When would you prefer each?

  4. Describe how the Gram-Schmidt process transforms a basis. What properties does the output have that the input might lack?

  5. If a matrix $A$ acts on a vector $\mathbf{v}$, how can you determine $A\mathbf{v}$ by only knowing what $A$ does to the basis vectors? Why does this work?