Linear independence is the foundation for understanding why vector spaces work the way they do. When you're solving systems of differential equations, determining whether a matrix is invertible, or finding the dimension of a solution space, you're really asking questions about linear independence. This concept connects directly to bases, rank, span, and the structure of solution sets.
You need to recognize linear independence in multiple contexts: vectors in $\mathbb{R}^n$, columns of matrices, and solutions to differential equations. Don't just memorize the definition. Understand what independence means geometrically (no redundant directions) and algebraically (the only way to get zero is the trivial combination). Know how to test for it, and know what breaks when vectors become dependent.
Linear independence and dependence are two sides of the same coin. Mastering when and why vectors fall into each category is essential for every topic that follows.
A set of vectors $\{v_1, v_2, \dots, v_k\}$ is linearly independent if the only solution to
$$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = \mathbf{0}$$
is the trivial solution $c_1 = c_2 = \cdots = c_k = 0$.
What this really says: no vector in the set can be written as a linear combination of the others. Each vector contributes a genuinely new "direction" to the set. This property determines the efficiency of a spanning set, since independent vectors carry no redundancy.
A set is linearly dependent if there exist scalars $c_1, c_2, \dots, c_k$, not all zero, such that $c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = \mathbf{0}$. That nontrivial solution means you can rearrange to express at least one vector as a combination of the others.
Any set containing the zero vector is automatically dependent. Why? Suppose $v_1 = \mathbf{0}$. Then $5v_1 + 0v_2 + \cdots + 0v_k = \mathbf{0}$ is a nontrivial solution (the coefficient 5 is nonzero). You don't even need to row reduce.
How to test independence in practice: form a matrix with the vectors as columns, row reduce to echelon form, and count pivots. A pivot in every column means no free variables, which means only the trivial solution exists. That's independence.
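The pivot-counting test above can be sketched numerically. As an assumed tool choice (the text works by hand, not with any particular library), NumPy's `matrix_rank` reports the number of pivots without manual row reduction:

```python
import numpy as np

# Columns of A are the vectors being tested for independence.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 0.0]])

# rank = number of pivot columns; independence needs a pivot in every column.
rank = np.linalg.matrix_rank(A)
n_cols = A.shape[1]
print(rank == n_cols)  # False: the third column equals 2*col1 + 3*col2
```

A rank equal to the number of columns means no free variables, hence only the trivial solution.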
Compare: Linear independence vs. linear dependence. Both describe relationships among vectors, but independence means no redundancy while dependence means at least one vector is expressible from the others. Exam problems often ask you to determine which case applies and explain the geometric or algebraic consequence.
Linear independence determines how efficiently you can describe a vector space. It's the bridge between individual vectors and the global properties of span, basis, and dimension.
The span of a set of vectors is the collection of all possible linear combinations of those vectors. Think of it as "everywhere you can reach" using those vectors with any scalar weights.
For example, in $\mathbb{R}^3$, two independent vectors span a plane. Adding a third vector that already lies in that plane (dependent on the first two) doesn't expand the span beyond that plane.
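This plane example can be checked directly: the dimension of a span equals the rank of the matrix holding the vectors as columns (a small NumPy sketch, with the specific vectors chosen for illustration):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
w = 2 * v1 - 5 * v2            # already lies in the plane spanned by v1 and v2

# The span's dimension is the rank of the matrix with the vectors as columns.
dim_two = np.linalg.matrix_rank(np.column_stack([v1, v2]))
dim_three = np.linalg.matrix_rank(np.column_stack([v1, v2, w]))
print(dim_two, dim_three)  # 2 2 -- the dependent vector doesn't grow the span
```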
A basis is a linearly independent set that also spans the entire space. It's the minimal complete description of a vector space.
At most $n$ vectors can be linearly independent in $\mathbb{R}^n$. You can't have more independent directions than the space has dimensions.
The standard basis $\{e_1, e_2, \dots, e_n\}$ is the classic example: each vector has a 1 in exactly one coordinate and 0s elsewhere. These clearly point in completely different directions with no redundancy.
Geometrically, two independent vectors in $\mathbb{R}^2$ are any two vectors that don't lie on the same line through the origin. Three independent vectors in $\mathbb{R}^3$ don't all lie in the same plane.
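For two vectors in $\mathbb{R}^2$, the geometric test reduces to a single determinant, which is nonzero exactly when the vectors don't share a line through the origin (a minimal sketch; the example vectors are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])

# Nonzero determinant: u and v point along different lines, so independent.
d_indep = np.linalg.det(np.column_stack([u, v]))   # -2.0
# A multiple of u lies on u's line, so the determinant vanishes: dependent.
d_dep = np.linalg.det(np.column_stack([u, 2 * u]))
print(d_indep, d_dep)
```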
Compare: Span vs. basis. Span tells you what you can reach, while a basis tells you the most efficient way to reach it. If a problem asks for a basis, you need both independence AND spanning. Checking just one isn't enough.
The matrix perspective transforms abstract independence questions into concrete computational procedures. Row reduction is your primary tool.
The rank of a matrix is the number of pivot positions after row reduction. This equals the maximum number of linearly independent columns (or rows).
The independence of a matrix $A$'s columns directly controls the solution behavior of $A\mathbf{x} = \mathbf{0}$: independent columns force the unique solution $\mathbf{x} = \mathbf{0}$, while dependent columns produce a nontrivial null space of solutions.
The Rank-Nullity Theorem ties this together:
$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n,$$
where $n$ is the number of columns. This is one of the most useful relationships in the course.
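The theorem can be verified numerically on a small example. This sketch assumes SciPy's `null_space` helper for the nullity; the matrix is chosen to be rank-deficient on purpose:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # second row is twice the first: rank 1

rank = np.linalg.matrix_rank(A)       # 1
nullity = null_space(A).shape[1]      # dimension of the null space: 2
print(rank + nullity == A.shape[1])   # True: 1 + 2 = 3 columns
```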
Compare: Full rank vs. rank-deficient matrices. Full rank means independent columns and a trivial null space. Rank deficiency signals dependence and a nontrivial null space. Always connect rank to the dimension of the null space via $\operatorname{nullity}(A) = n - \operatorname{rank}(A)$.
When your "vectors" are functions, you need a specialized test. The Wronskian determinant extends linear independence to solution spaces of differential equations.
For functions $f_1, f_2, \dots, f_n$, the Wronskian is the determinant of the matrix whose rows are the functions and their successive derivatives (up to order $n-1$):
$$W(f_1, \dots, f_n)(t) = \det \begin{pmatrix} f_1(t) & f_2(t) & \cdots & f_n(t) \\ f_1'(t) & f_2'(t) & \cdots & f_n'(t) \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)}(t) & f_2^{(n-1)}(t) & \cdots & f_n^{(n-1)}(t) \end{pmatrix}$$
The key rules for interpreting it: if $W(t_0) \neq 0$ at even one point $t_0$, the functions are linearly independent on the interval. If $W(t) = 0$ for all $t$, the test is inconclusive for arbitrary functions: dependent functions always have an identically zero Wronskian, but some independent functions do too.
There's an important exception: for functions that are solutions to the same linear homogeneous DE, the situation is cleaner. Abel's theorem tells you the Wronskian is either identically zero or never zero on the interval. So for DE solutions specifically, $W = 0$ at one point means $W = 0$ everywhere, which means the solutions are dependent.
Solutions to an $n$th-order linear homogeneous DE form an $n$-dimensional vector space. To write the general solution, you need exactly $n$ linearly independent solutions (called a fundamental set).
The Wronskian confirms you have such a set. Without verifying independence, your "general solution" might be missing solutions because some of your candidates are redundant combinations of others.
Compare: Testing independence in $\mathbb{R}^n$ vs. function spaces. Both ask "is the only zero-combination the trivial one?" but the methods differ. For vectors, row reduce. For functions, compute the Wronskian. DE problems frequently require you to verify independence before writing a general solution.
| Concept | What to Know |
|---|---|
| Definition of independence | Trivial solution test: $c_1 v_1 + \cdots + c_k v_k = 0$ only if all $c_i = 0$ |
| Testing in $\mathbb{R}^n$ | Row reduce the matrix with the vectors as columns; pivot in every column = independent |
| Span and efficiency | Dependent vectors don't expand span; independent vectors do |
| Basis properties | Independent + spanning; unique representations; size = dimension |
| Matrix rank | Number of pivots = number of independent columns |
| System solutions | Independent columns ⇒ unique homogeneous solution; dependent ⇒ nontrivial null space |
| Rank-Nullity | $\operatorname{rank}(A) + \operatorname{nullity}(A) = n$ (the number of columns) |
| Wronskian test | Determinant of functions/derivatives; nonzero at one point ⇒ independent |
| Function spaces | Fundamental solution sets for $n$th-order linear DEs need $n$ independent solutions |
If a set of four vectors in $\mathbb{R}^3$ is given, what can you immediately conclude about their linear independence, and why?
Compare how you would test for linear independence of three vectors in $\mathbb{R}^3$ versus three functions that are solutions to a third-order linear DE.
A matrix has rank 3 and 5 columns. What does this tell you about the linear independence of the columns, and what is the dimension of the null space?
Why does including the zero vector in a set automatically make it linearly dependent? Connect this to the definition involving scalar coefficients.
If the Wronskian of two functions equals zero at one point but you haven't checked elsewhere, can you conclude the functions are dependent? What changes if you know the functions are both solutions to the same second-order linear homogeneous DE?