Basis vectors are the backbone of everything you'll do in linear algebra, from solving systems of equations to understanding matrix transformations and eigenvalue problems. When you grasp how basis vectors work, you're building the mental framework for span, linear independence, dimension, and coordinate representation. These concepts appear repeatedly throughout the course, and exam questions will test whether you understand the underlying structure, not just the formulas.
Basis vectors give you a "coordinate language" for talking about any vector in a space. Every topic that follows, including change of basis, orthogonalization, and diagonalization, builds on this foundation. So don't just memorize that basis vectors must be linearly independent and span the space. Know why those two properties matter and how they connect to dimension, unique representation, and transformations.
A basis is the minimal spanning set for a vector space. It has exactly enough vectors to reach everywhere, with no redundancy.
A basis is a set of vectors that does two things at once: it spans the entire vector space (any vector in the space can be written as a linear combination of the basis vectors) and it's linearly independent (no basis vector can be expressed as a combination of the others).
Why do both conditions matter? Spanning ensures you can actually represent every vector. Independence ensures that representation is unique. Once you fix a basis, every vector corresponds to exactly one set of coefficients. Those coefficients are the vector's coordinates relative to that basis.
The standard basis vectors are unit vectors along the coordinate axes. In $\mathbb{R}^n$, these are $e_1 = (1, 0, \dots, 0)$, $e_2 = (0, 1, 0, \dots, 0)$, and so on through $e_n = (0, \dots, 0, 1)$.
They provide the "default" coordinate system. When no basis is specified, assume the standard basis. They're also the simplest for computation: the coordinates of a vector relative to the standard basis are just its components. The vector $v = (v_1, \dots, v_n)$ in $\mathbb{R}^n$ simply means $v = v_1 e_1 + \dots + v_n e_n$.
Compare: Standard basis vs. arbitrary basis: both span the same space and have the same number of vectors, but standard basis vectors are orthonormal and align with coordinate axes, making computation straightforward. On exams, if you're asked to "find coordinates," check which basis you're working in.
These two properties are the "tests" a set of vectors must pass to qualify as a basis. Span ensures coverage; independence ensures efficiency.
A set of vectors $\{v_1, v_2, \dots, v_k\}$ is linearly independent if the only solution to

$$c_1 v_1 + c_2 v_2 + \dots + c_k v_k = 0$$

is $c_1 = c_2 = \dots = c_k = 0$. This is the formal test. In practice, you check this by row-reducing the matrix whose columns are the vectors and seeing whether every column has a pivot.
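The pivot test can be sketched in a few lines of Python (assuming NumPy is available; `is_independent` is an illustrative helper name, and full column rank is equivalent to a pivot in every column):

```python
import numpy as np

def is_independent(*vectors):
    """A set is independent iff the matrix with the vectors as
    columns has a pivot in every column, i.e. full column rank."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == M.shape[1]

print(is_independent((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # True
print(is_independent((1, 0, 2), (0, 1, 1), (1, 1, 3)))  # False: (1,1,3) = v1 + v2
```

Using rank instead of explicit row reduction avoids writing an RREF routine by hand, but it tests exactly the same condition.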
If a set is dependent, at least one vector can be removed without shrinking the span. That redundancy is exactly what a basis avoids: independence guarantees that coordinates are unique.
The span of a set of vectors is the collection of all their linear combinations, written $\mathrm{span}\{v_1, \dots, v_k\}$. It's every vector you can "reach" using scalar multiples and addition of those vectors.
A basis must span the entire space. If the span is smaller than the full space, you need more vectors. You can visualize this geometrically: two non-parallel vectors in $\mathbb{R}^3$ span a plane through the origin, not the full 3D space. You'd need a third vector (not in that plane) to complete a basis for $\mathbb{R}^3$.
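The plane picture can be checked numerically (a NumPy sketch; `in_span` is a hypothetical helper, and the test is that appending a vector doesn't raise the rank):

```python
import numpy as np

def in_span(w, *vectors):
    """w lies in span{v1, ..., vk} iff appending w as a column
    does not increase the rank."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([M, w])) == np.linalg.matrix_rank(M)

v1, v2 = (1, 0, 0), (0, 1, 0)      # span the xy-plane in R^3
print(in_span((2, 3, 0), v1, v2))  # True: lies in the plane
print(in_span((0, 0, 1), v1, v2))  # False: points out of the plane
```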
Dimension equals the number of vectors in any basis. This is a theorem, not just a definition. It tells you the degrees of freedom in the space: in $\mathbb{R}^3$, you need exactly 3 coordinates to specify a point, so $\dim(\mathbb{R}^3) = 3$.
Subspaces have smaller (or equal) dimension. A plane through the origin in $\mathbb{R}^3$ is a 2-dimensional subspace. A line through the origin is 1-dimensional. The zero vector alone forms a 0-dimensional subspace.
Compare: Linear independence vs. span: independence prevents "too many" vectors (no redundancy), while span prevents "too few" (full coverage). A basis is the sweet spot where both conditions hold simultaneously. Free-response questions often ask you to verify one or both properties.
Different bases reveal different structure in the same space. Choosing the right basis can dramatically simplify a problem.
A change of basis expresses vectors using different coordinates. The vector itself doesn't change; only its representation does. Think of it like converting between Celsius and Fahrenheit: the temperature is the same, but the number you write down depends on the scale.
If $[v]_B$ denotes the coordinates of $v$ in basis $B$, and you want coordinates in basis $C$, you use a change of basis matrix $P$:

$$[v]_C = P\,[v]_B$$
The matrix $P$ has the vectors of $B$ expressed as columns in terms of $C$. Strategic basis choice simplifies problems: for instance, a matrix that looks dense in the standard basis might become diagonal in an eigenbasis.
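Finding coordinates relative to a non-standard basis is just solving a linear system. A minimal NumPy sketch, with a made-up basis for illustration:

```python
import numpy as np

# Basis B given by its vectors (in standard coordinates) as columns.
B = np.column_stack([(1, 1), (1, -1)])
v = np.array([3.0, 1.0])

# Coordinates of v relative to B: solve B @ c = v.
c = np.linalg.solve(B, v)
print(c)                      # [2. 1.]  since 2*(1,1) + 1*(1,-1) = (3,1)
assert np.allclose(B @ c, v)  # v is recovered from its B-coordinates
```

Here the columns of `B` play the role of the change-of-basis matrix from $B$-coordinates to standard coordinates; solving the system inverts that map.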
An orthonormal basis has two special properties: the vectors are orthogonal (perpendicular to each other) and normalized (each has unit length). Formally, $v_i \cdot v_j = 0$ for $i \neq j$ and $v_i \cdot v_i = 1$.
Why care? Projections become simple dot products. The coordinate of $v$ along $u_i$ is just $v \cdot u_i$. No matrix inversion needed. Also, the change of basis matrix for an orthonormal basis is orthogonal, meaning $P^{-1} = P^T$. That makes computations fast and numerically stable.
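A quick NumPy sketch of both facts, using an example orthonormal basis for $\mathbb{R}^2$ (the standard basis rotated 45 degrees):

```python
import numpy as np

# An orthonormal basis for R^2.
u1 = np.array([1.0, 1.0]) / np.sqrt(2)
u2 = np.array([1.0, -1.0]) / np.sqrt(2)

v = np.array([3.0, 1.0])

# With an orthonormal basis, each coordinate is just a dot product.
c1, c2 = v @ u1, v @ u2
assert np.allclose(c1 * u1 + c2 * u2, v)  # v is recovered exactly

# The change-of-basis matrix Q is orthogonal: Q^T plays the role of Q^{-1}.
Q = np.column_stack([u1, u2])
assert np.allclose(Q.T @ Q, np.eye(2))
```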
The Gram-Schmidt process converts any basis into an orthonormal one. Here's how it works: normalize the first vector. Then, for each remaining vector, subtract its projections onto the orthonormal vectors built so far, and normalize what's left.
At each step, the span is preserved. The new orthonormal basis spans exactly the same space as the original.
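The procedure can be sketched directly (a minimal NumPy implementation of classical Gram-Schmidt; the input vectors are arbitrary examples):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a basis into an orthonormal one: subtract from each
    vector its projections onto the previous outputs, then normalize."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w = w - (w @ u) * u   # remove the component along u
        basis.append(w / np.linalg.norm(w))
    return basis

u1, u2 = gram_schmidt([(3, 1), (2, 2)])
assert abs(u1 @ u2) < 1e-12                 # orthogonal
assert np.isclose(np.linalg.norm(u1), 1.0)  # unit length
```

Note the assertion at the end checks exactly the two orthonormality conditions from the definition above.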
Compare: Arbitrary basis vs. orthonormal basis: both span the same space, but orthonormal bases make projection, coordinate finding, and matrix inversion trivially easy. If an exam problem involves projections or least squares, think Gram-Schmidt.
The power of basis vectors becomes clear when you see how they interact with linear transformations. The basis you choose determines how "nice" your matrix looks.
A matrix's columns show where the basis vectors land under the transformation. If $A$ is a transformation matrix (relative to the standard basis), then column $j$ is $Ae_j$, the image of the $j$-th standard basis vector.
This is powerful because of linearity. Once you know what $A$ does to each basis vector, you know what it does to every vector. Any vector $v = c_1 e_1 + \dots + c_n e_n$ maps to $Av = c_1 Ae_1 + \dots + c_n Ae_n$. Different bases yield different matrix representations of the same transformation.
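Both facts are easy to verify numerically (NumPy sketch with an arbitrary example matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Column j of A is the image of the j-th standard basis vector.
assert np.allclose(A @ e1, A[:, 0])
assert np.allclose(A @ e2, A[:, 1])

# Linearity: knowing A on the basis determines A on every vector.
v = 4 * e1 + 5 * e2
assert np.allclose(A @ v, 4 * (A @ e1) + 5 * (A @ e2))
```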
Eigenvectors satisfy $Av = \lambda v$: the transformation only scales them by the eigenvalue $\lambda$, without changing their direction.
If you can find $n$ linearly independent eigenvectors for an $n \times n$ matrix, they form an eigenbasis. In this basis, the matrix becomes diagonal, with eigenvalues on the diagonal. That's diagonalization: $A = PDP^{-1}$, where $P$ is the matrix of eigenvectors and $D$ is the diagonal matrix of eigenvalues.
Why does this matter? Diagonal matrices make powers and exponentials trivial. Computing $A^k$ reduces to $PD^kP^{-1}$, and raising a diagonal matrix to a power just means raising each diagonal entry to that power. This is also central to solving systems of differential equations, where $e^{At}$ appears.
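A NumPy sketch of the power trick, using an example matrix with eigenvalues 5 and 2 (chosen so it's diagonalizable):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigendecomposition: columns of P are eigenvectors, D holds eigenvalues.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
assert np.allclose(A, P @ D @ np.linalg.inv(P))  # A = P D P^{-1}

# A^5 via the eigenbasis: only the diagonal entries get powered.
A5 = P @ np.diag(eigvals ** 5) @ np.linalg.inv(P)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```

For a single power of a small matrix this is no faster than direct multiplication, but for large $k$ (or for $e^{At}$) working in the eigenbasis is the standard shortcut.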
The choice of basis defines your coordinate system. Cartesian coordinates correspond to the standard basis. Other coordinate systems (polar, cylindrical, etc.) correspond to different basis choices, sometimes ones that vary from point to point.
Non-orthogonal bases are perfectly valid but messier. Coordinates still exist and are still unique, but formulas for length and angle become more complex because you can't just use the dot product in the usual way. Physical applications often suggest natural bases: principal axes of inertia, normal modes of vibration, and similar structures all point toward a basis that simplifies the problem.
Compare: Standard basis vs. eigenbasis: the standard basis is universal and simple, but an eigenbasis is tailored to a specific transformation, making that transformation diagonal. If a problem involves repeated application of a matrix (powers, exponentials), diagonalization via eigenvectors is usually the approach.
| Concept | Key Topics |
|---|---|
| Definition & uniqueness | Basis vectors, Properties of basis vectors |
| Span and coverage | Span of basis vectors, Standard basis vectors |
| Linear independence | Linear independence, Properties of basis vectors |
| Dimension | Dimension of a vector space, Standard basis |
| Orthogonality | Orthonormal basis, Gram-Schmidt process |
| Change of representation | Change of basis, Basis in coordinate systems |
| Transformations | Matrix transformations, Eigenvectors as basis |
| Diagonalization | Eigenvectors as basis, Change of basis |
What two properties must a set of vectors satisfy to be a basis, and why is each property necessary?
If you have 4 vectors in $\mathbb{R}^3$, can they form a basis? Explain using the concept of dimension.
Compare and contrast the standard basis with an eigenbasis for a matrix $A$. When would you prefer each?
Describe how the Gram-Schmidt process transforms a basis. What properties does the output have that the input might lack?
If a matrix $A$ acts on a vector $v$, how can you determine $Av$ by only knowing what $A$ does to the basis vectors? Why does this work?