➗ Linear Algebra and Differential Equations

Key Concepts of Linear Independence

Why This Matters

Linear independence is the foundation for understanding why vector spaces work the way they do. When you're solving systems of differential equations, determining whether a matrix is invertible, or finding the dimension of a solution space, you're really asking questions about linear independence. This concept connects directly to bases, rank, span, and the structure of solution sets.

You need to recognize linear independence in multiple contexts: vectors in $\mathbb{R}^n$, columns of matrices, and solutions to differential equations. Don't just memorize the definition. Understand what independence means geometrically (no redundant directions) and algebraically (the only way to get zero is the trivial combination). Know how to test for it, and know what breaks when vectors become dependent.


The Core Definition and Its Opposite

Linear independence and dependence are two sides of the same coin. Mastering when and why vectors fall into each category is essential for every topic that follows.

Definition of Linear Independence

A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is linearly independent if the only solution to

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$$

is the trivial solution $c_1 = c_2 = \cdots = c_n = 0$.

What this really says: no vector in the set can be written as a linear combination of the others. Each vector contributes a genuinely new "direction" to the set. This property determines the efficiency of a spanning set, since independent vectors carry no redundancy.

Linear Dependence: The Failure Case

A set is linearly dependent if there exist scalars $c_1, c_2, \ldots, c_n$, not all zero, such that $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$. That nontrivial solution means you can rearrange to express at least one vector as a combination of the others.

  • Dependent vectors contain redundancy: removing one doesn't shrink the span
  • Dimension count shortcut: if you have more vectors than the dimension of your ambient space, they must be dependent (e.g., any 4 vectors in $\mathbb{R}^3$)

The Zero Vector Test

Any set containing the zero vector is automatically dependent. Why? Suppose $\mathbf{v}_1 = \mathbf{0}$. Then $5\mathbf{v}_1 + 0\mathbf{v}_2 + \cdots + 0\mathbf{v}_n = \mathbf{0}$ is a nontrivial solution (the coefficient 5 is nonzero). You don't even need to row reduce.

How to test independence in practice: form a matrix with the vectors as columns, row reduce to echelon form, and count pivots. A pivot in every column means no free variables, which means only the trivial solution exists. That's independence.
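The pivot-counting test above can be sketched in Python with NumPy. `is_independent` is a hypothetical helper, not part of any course library; it relies on the fact that a pivot in every column is the same as full column rank.

```python
import numpy as np

def is_independent(vectors):
    """Return True if the vectors (placed as matrix columns) are
    linearly independent, i.e. the matrix has full column rank."""
    A = np.column_stack(vectors)                  # vectors become columns
    return np.linalg.matrix_rank(A) == A.shape[1]  # pivot in every column?

v1 = np.array([1, 0, 0])
v2 = np.array([0, 1, 0])
print(is_independent([v1, v2]))           # → True
print(is_independent([v1, v2, v1 + v2]))  # → False (third is redundant)
```

Note that `matrix_rank` uses a numerical tolerance internally, so nearly-dependent vectors with tiny perturbations may still report as independent.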

Compare: Linear independence vs. linear dependence. Both describe relationships among vectors, but independence means no redundancy while dependence means at least one vector is expressible from the others. Exam problems often ask you to determine which case applies and explain the geometric or algebraic consequence.


Connecting Independence to Vector Space Structure

Linear independence determines how efficiently you can describe a vector space. It's the bridge between individual vectors and the global properties of span, basis, and dimension.

Span and Linear Independence

The span of a set of vectors is the collection of all possible linear combinations of those vectors. Think of it as "everywhere you can reach" using those vectors with any scalar weights.

  • Independent vectors maximize spanning efficiency: each one expands the span into a genuinely new dimension
  • Dependent vectors don't add new reach: the span stays the same size even if you toss in extra vectors

For example, in $\mathbb{R}^3$, two independent vectors span a plane. Adding a third vector that already lies in that plane (dependent on the first two) doesn't expand the span beyond that plane.

Basis: Independence Meets Span

A basis is a linearly independent set that also spans the entire space. It's the minimal complete description of a vector space.

  • The number of vectors in any basis equals the dimension of the space. This count is the same no matter which basis you choose, which is why dimension is well-defined.
  • Every vector in the space has a unique representation as a linear combination of basis vectors. This uniqueness comes directly from independence. If the set were dependent, you could write the same vector in multiple ways.
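The uniqueness of basis coordinates can be seen numerically: if the basis vectors are the columns of an invertible matrix, solving one linear system recovers the unique coefficients. The basis `B` below is a made-up example for illustration.

```python
import numpy as np

# Columns of B form a (hypothetical) basis of R^2: [1,0] and [1,1]
B = np.array([[1., 1.],
              [0., 1.]])
v = np.array([3., 2.])

# Independence of the columns makes B invertible, so B @ c = v
# has exactly one solution: the unique coordinates of v in this basis
c = np.linalg.solve(B, v)
print(c)  # → [1. 2.]  since v = 1*[1,0] + 2*[1,1]
```

If the columns were dependent, `solve` would raise a singular-matrix error: there would be no unique coordinate vector.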

Linear Independence in $\mathbb{R}^n$

At most $n$ vectors can be linearly independent in $\mathbb{R}^n$. You can't have more independent directions than the space has dimensions.

The standard basis $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$ is the classic example: each vector has a 1 in exactly one coordinate and 0s elsewhere. These clearly point in completely different directions with no redundancy.

Geometrically, two independent vectors in $\mathbb{R}^2$ are any two vectors that don't lie on the same line through the origin. Three independent vectors in $\mathbb{R}^3$ don't all lie in the same plane.

Compare: Span vs. basis. Span tells you what you can reach, while a basis tells you the most efficient way to reach it. If a problem asks for a basis, you need both independence AND spanning. Checking just one isn't enough.


Testing Independence: Matrices and Rank

The matrix perspective transforms abstract independence questions into concrete computational procedures. Row reduction is your primary tool.

Matrix Rank and Column Independence

The rank of a matrix is the number of pivot positions after row reduction. This equals the maximum number of linearly independent columns (or rows).

  • Full column rank means every column contains a pivot, so all columns are independent. For an $m \times n$ matrix, full column rank means $\text{rank} = n$.
  • Rank also equals the dimension of the column space (the span of the columns).

Solving Systems with Independence

The independence of a matrix's columns directly controls the solution behavior of $A\mathbf{x} = \mathbf{b}$:

  1. Unique solution to $A\mathbf{x} = \mathbf{0}$: The columns are independent (full column rank). The only homogeneous solution is $\mathbf{x} = \mathbf{0}$. If $\mathbf{b}$ is in the column space, there's exactly one solution.
  2. Infinitely many solutions to $A\mathbf{x} = \mathbf{0}$: The columns are dependent. Free variables appear, and the null space has dimension $n - \text{rank}(A)$.
  3. No solution: This happens when $\mathbf{b}$ is not in the column space. Column independence alone doesn't guarantee consistency.

The Rank-Nullity Theorem ties this together:

$$\text{rank}(A) + \dim(\text{Null } A) = n$$

where $n$ is the number of columns. This is one of the most useful relationships in the course.

Compare: Full rank vs. rank-deficient matrices. Full rank means independent columns and a trivial null space. Rank deficiency signals dependence and a nontrivial null space. Always connect rank to the dimension of the null space via $\dim(\text{Null } A) = n - \text{rank}(A)$.
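The Rank-Nullity Theorem can be checked numerically. The matrix below is an intentionally rank-deficient made-up example (its second row is twice the first), and the nullity is estimated by counting near-zero singular values.

```python
import numpy as np

# Hypothetical 3x5 matrix, rank-deficient on purpose
A = np.array([[1., 2., 3., 4., 5.],
              [2., 4., 6., 8., 10.],   # 2x row 1: adds no rank
              [0., 1., 0., 1., 0.]])

n = A.shape[1]                         # number of columns
rank = np.linalg.matrix_rank(A)

# Nullity = n minus the number of nonzero singular values
s = np.linalg.svd(A, compute_uv=False)
nullity = n - int(np.sum(s > 1e-10))

print(rank, nullity, rank + nullity == n)  # → 2 3 True
```

Here two of the five columns carry pivots, so three free variables remain: $\dim(\text{Null } A) = 5 - 2 = 3$.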


Independence in Differential Equations: The Wronskian

When your "vectors" are functions, you need a specialized test. The Wronskian determinant extends linear independence to solution spaces of differential equations.

The Wronskian Determinant

For $n$ functions $f_1, f_2, \ldots, f_n$, the Wronskian is the determinant of the matrix whose rows are the functions and their successive derivatives (up to order $n-1$):

$$W(f_1, f_2, \ldots, f_n) = \begin{vmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{vmatrix}$$

The key rules for interpreting it:

  • If $W \neq 0$ at any single point in the interval, the functions are linearly independent.
  • If $W = 0$ everywhere on the interval, the functions might be dependent, but this alone doesn't prove it. For arbitrary functions, $W = 0$ everywhere is necessary but not sufficient for dependence.

There's an important exception: for functions that are solutions to the same linear homogeneous DE, the situation is cleaner. Abel's theorem tells you the Wronskian is either identically zero or never zero on the interval. So for DE solutions specifically, $W = 0$ at one point means $W = 0$ everywhere, which means the solutions are dependent.
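A quick symbolic check of these rules with SymPy's built-in `wronskian`. The pair $e^x$ and $e^{2x}$ is an illustrative example (they solve $y'' - 3y' + 2y = 0$, a DE assumed here, not taken from the text):

```python
import sympy as sp

x = sp.symbols('x')
f1, f2 = sp.exp(x), sp.exp(2*x)   # two solutions of y'' - 3y' + 2y = 0

# Wronskian = f1*f2' - f2*f1'
W = sp.wronskian([f1, f2], x)
print(sp.simplify(W))             # → exp(3*x), never zero → independent
```

Consistent with Abel's theorem, the Wronskian $e^{3x}$ is nonzero at every point, so $\{e^x, e^{2x}\}$ is a fundamental set for that equation.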

Why the Wronskian Matters for DEs

Solutions to an $n$th-order linear homogeneous DE form an $n$-dimensional vector space. To write the general solution, you need exactly $n$ linearly independent solutions (called a fundamental set).

The Wronskian confirms you have such a set. Without verifying independence, your "general solution" might be missing solutions because some of your candidates are redundant combinations of others.

Compare: Testing independence in $\mathbb{R}^n$ vs. function spaces. Both ask "is the only zero-combination the trivial one?" but the methods differ. For vectors, row reduce. For functions, compute the Wronskian. DE problems frequently require you to verify independence before writing a general solution.


Quick Reference Table

| Concept | What to Know |
| --- | --- |
| Definition of independence | Trivial solution test: $c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$ only if all $c_i = 0$ |
| Testing in $\mathbb{R}^n$ | Row reduce the matrix of columns; pivot in every column = independent |
| Span and efficiency | Dependent vectors don't expand span; independent vectors do |
| Basis properties | Independent + spanning; unique representations; size = dimension |
| Matrix rank | Number of pivots = number of independent columns |
| System solutions | Independent columns → unique homogeneous solution; dependent → nontrivial null space |
| Rank-Nullity | $\text{rank}(A) + \dim(\text{Null } A) = n$ |
| Wronskian test | Determinant of functions/derivatives; nonzero at one point → independent |
| Function spaces | Fundamental solution sets for $n$th-order linear DEs need $n$ independent solutions |

Self-Check Questions

  1. If a set of four vectors in $\mathbb{R}^3$ is given, what can you immediately conclude about their linear independence, and why?

  2. Compare how you would test for linear independence of three vectors in $\mathbb{R}^4$ versus three functions that are solutions to a third-order linear DE.

  3. A matrix $A$ has rank 3 and 5 columns. What does this tell you about the linear independence of the columns, and what is the dimension of the null space?

  4. Why does including the zero vector in a set automatically make it linearly dependent? Connect this to the definition involving scalar coefficients.

  5. If the Wronskian of two functions equals zero at one point but you haven't checked elsewhere, can you conclude the functions are dependent? What changes if you know the functions are both solutions to the same second-order linear homogeneous DE?