
Key Concepts of Linear Transformations

Why This Matters

Linear transformations are the backbone of linear algebra—they're the functions that actually do something to vectors while respecting the structure of vector spaces. When you're tested on this material, you're not just being asked to recall definitions. You're being evaluated on whether you understand how transformations behave, why certain properties guarantee invertibility, and how matrix operations connect to geometric intuition. Every concept here—from kernels to eigenvalues—builds toward solving systems, analyzing stability, and working in different coordinate systems.

Don't just memorize that "the kernel is the set of vectors mapping to zero." Know why that matters: it tells you about information loss, injectivity, and the dimension of your solution space. Understand how composition corresponds to matrix multiplication, and when a transformation can be reversed. These connections between algebraic properties and geometric meaning are exactly what exam questions target—especially in proofs and applications.


Foundational Definitions and Structure

Linear transformations must satisfy two critical properties that preserve the algebraic structure of vector spaces. These properties—additivity and homogeneity—are what distinguish linear maps from arbitrary functions.

Definition of a Linear Transformation

  • Preserves vector addition and scalar multiplication—a function $T: V \to W$ is linear if $T(\mathbf{x} + \mathbf{y}) = T(\mathbf{x}) + T(\mathbf{y})$ for all vectors $\mathbf{x}, \mathbf{y} \in V$
  • Homogeneity condition requires $T(c\mathbf{x}) = cT(\mathbf{x})$ for any scalar $c$, meaning scaling before or after the transformation gives the same result
  • Zero vector always maps to zero—this is a direct consequence of linearity and serves as a quick check for whether a function could be linear
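
If you want to sanity-check a candidate function, you can test both conditions numerically on random vectors. Here's a minimal sketch, assuming NumPy is available; the map `T` and the helper `looks_linear` are illustrative names, and random spot-checks suggest linearity but don't prove it.

```python
import numpy as np

def T(x):
    """Candidate map R^2 -> R^2: multiplication by a fixed matrix, so it is linear."""
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    return A @ x

def looks_linear(T, trials=100):
    """Spot-check additivity and homogeneity on random vectors (evidence, not a proof)."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x, y = rng.normal(size=2), rng.normal(size=2)
        c = rng.normal()
        if not np.allclose(T(x + y), T(x) + T(y)):   # additivity
            return False
        if not np.allclose(T(c * x), c * T(x)):      # homogeneity
            return False
    return True

print(looks_linear(T))                    # True
print(np.allclose(T(np.zeros(2)), 0.0))   # the zero vector maps to zero
```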

Matrix Representation of Linear Transformations

  • Every linear transformation has a matrix representation once you fix bases for the domain and codomain vector spaces
  • Matrix-vector multiplication computes the transformation: if $A$ represents $T$, then $T(\mathbf{x}) = A\mathbf{x}$
  • Basis choice matters—the same transformation has different matrix representations in different bases, which is why change of basis becomes important
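
One concrete way to see where the matrix comes from: its $j$-th column is $T$ applied to the $j$-th basis vector. A minimal sketch, assuming the standard basis of $\mathbb{R}^n$ and NumPy (`matrix_of` is an illustrative helper, not a library function):

```python
import numpy as np

def matrix_of(T, n):
    """Columns of the representing matrix are T(e_1), ..., T(e_n),
    where e_1, ..., e_n is the standard basis of R^n."""
    return np.column_stack([T(e) for e in np.eye(n)])

# Example: T rotates by 90 degrees and then doubles the length.
def T(x):
    return 2.0 * np.array([-x[1], x[0]])

A = matrix_of(T, 2)
x = np.array([1.0, 2.0])
print(A)                          # [[ 0. -2.], [ 2.  0.]]
print(np.allclose(A @ x, T(x)))   # the matrix-vector product computes T
```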

Compare: Definition vs. Matrix Representation—the definition gives you the abstract properties to verify, while the matrix gives you a computational tool. On proofs, use the definition; for calculations, use the matrix.


Kernel, Image, and Injectivity/Surjectivity

These concepts describe what a transformation does to the vector space—what it collapses, what it reaches, and whether information is preserved or lost. The Rank-Nullity Theorem connects these ideas quantitatively.

Kernel (Null Space) and Image (Range)

  • Kernel $\ker(T) = \{\mathbf{v} \in V : T(\mathbf{v}) = \mathbf{0}\}$—represents the transformation's "information loss" and is always a subspace
  • Image $\text{Im}(T) = \{T(\mathbf{v}) : \mathbf{v} \in V\}$—the transformation's "reach" in the codomain, also a subspace
  • Rank-Nullity Theorem states $\dim(\ker T) + \dim(\text{Im}\,T) = \dim(V)$, connecting these two fundamental subspaces
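
Here's a small numerical illustration of the Rank-Nullity Theorem, assuming the transformation is given by a NumPy matrix; the matrix below is an arbitrary example with dependent columns:

```python
import numpy as np

# A maps R^3 -> R^3, but its third column equals the sum of the first two,
# so the map collapses a one-dimensional kernel.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

rank = np.linalg.matrix_rank(A)    # dim Im(T)
nullity = A.shape[1] - rank        # dim ker(T), by rank-nullity
print(rank, nullity)                       # 2 1
print(rank + nullity == A.shape[1])        # True: rank + nullity = dim(V)
```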

One-to-One and Onto Linear Transformations

  • Injective (one-to-one) means $\ker(T) = \{\mathbf{0}\}$—no two different inputs produce the same output
  • Surjective (onto) means $\text{Im}(T) = W$—every vector in the codomain is reachable
  • Bijective transformations are both injective and surjective, which is precisely when an inverse exists
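
When $T$ is given by an $m \times n$ matrix $A$, these conditions become rank checks: trivial kernel means full column rank, full image means full row rank. A sketch under that assumption, using NumPy (`injective` and `surjective` are illustrative helper names):

```python
import numpy as np

def injective(A):
    """Trivial kernel <=> full column rank."""
    return np.linalg.matrix_rank(A) == A.shape[1]

def surjective(A):
    """Image fills the codomain <=> full row rank."""
    return np.linalg.matrix_rank(A) == A.shape[0]

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # maps R^2 -> R^3

print(injective(A), surjective(A))   # True False: injects, but cannot cover R^3
```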

Compare: Kernel vs. Image—kernel measures what's "lost" (dimension = nullity), image measures what's "reached" (dimension = rank). If an FRQ asks about invertibility, check both: trivial kernel AND full image.


Composition and Inverses

When you chain transformations together or undo them, the algebra of matrices mirrors the algebra of functions. Matrix multiplication corresponds to function composition, and matrix inversion corresponds to function inversion.

Composition of Linear Transformations

  • Composition preserves linearity—if $T_1$ and $T_2$ are linear, then $(T_2 \circ T_1)(\mathbf{x}) = T_2(T_1(\mathbf{x}))$ is also linear
  • Matrix product represents composition—if $A$ represents $T_1$ and $B$ represents $T_2$, then $BA$ represents $T_2 \circ T_1$ (note the order!)
  • Order matters because matrix multiplication is not commutative; $T_2 \circ T_1 \neq T_1 \circ T_2$ in general
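
You can see the ordering convention directly with two concrete $2 \times 2$ matrices. This is just a sketch; the particular matrices (a rotation and a shear) are arbitrary examples:

```python
import numpy as np

A = np.array([[0.0, -1.0],   # A represents T1: rotation by 90 degrees
              [1.0,  0.0]])
B = np.array([[1.0,  1.0],   # B represents T2: horizontal shear
              [0.0,  1.0]])

x = np.array([1.0, 0.0])

# "First T1, then T2" means B @ (A @ x), i.e. the single matrix BA.
print(np.allclose((B @ A) @ x, B @ (A @ x)))   # True
print(np.allclose(B @ A, A @ B))               # False: order matters
```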

Inverse Linear Transformations

  • Inverse undoes the transformation—$T^{-1}(T(\mathbf{x})) = \mathbf{x}$ and $T(T^{-1}(\mathbf{y})) = \mathbf{y}$
  • Exists if and only if $T$ is bijective (both injective and surjective), which for square matrices means $\det(A) \neq 0$
  • Inverse matrix $A^{-1}$ satisfies $A^{-1}A = AA^{-1} = I$, and can be computed via row reduction or the adjugate formula
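
A quick numerical check of these facts, assuming an invertible example matrix and NumPy's `numpy.linalg.inv` (which handles the row-reduction work):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.det(A))                    # ~1.0, nonzero, so A is invertible

A_inv = np.linalg.inv(A)
print(np.allclose(A_inv @ A, np.eye(2)))   # A^{-1} A = I
print(np.allclose(A @ A_inv, np.eye(2)))   # A A^{-1} = I

# (AB)^{-1} = B^{-1} A^{-1}: the order reverses.
B = np.array([[1.0, 3.0],
              [0.0, 1.0]])
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))   # True
```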

Compare: Composition vs. Inverse—composition builds complexity (multiply matrices), inverse removes it (find $A^{-1}$). Remember: $(AB)^{-1} = B^{-1}A^{-1}$—the order reverses!


Eigenvalues and Eigenvectors

Eigenvectors reveal the "natural directions" of a transformation—directions that don't rotate, only stretch or compress. This spectral information is crucial for understanding long-term behavior of iterated transformations and solving differential equations.

Eigenvalues and Eigenvectors of Linear Transformations

  • Eigenvector definition—a non-zero vector $\mathbf{v}$ satisfying $T(\mathbf{v}) = \lambda \mathbf{v}$, meaning the transformation only scales it
  • Eigenvalue $\lambda$ is the scaling factor; found by solving $\det(A - \lambda I) = 0$ (the characteristic equation)
  • Applications include stability analysis, diagonalization, solving systems of differential equations, and principal component analysis
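
Here's a short computation with `numpy.linalg.eig`; the matrix is an arbitrary example chosen to have simple eigenvalues, so treat this as a sketch rather than a recipe:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, -1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are the eigenvectors
print(eigenvalues)                             # [ 2. -1.]

# Verify A v = lambda v for each eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))         # True, True

# The characteristic equation det(A - lambda I) = 0 holds at each eigenvalue.
for lam in eigenvalues:
    print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))   # True, True
```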

Compare: Eigenvalues vs. Eigenvectors—eigenvalues tell you how much scaling occurs, eigenvectors tell you in which directions. A transformation can have the same eigenvalue for multiple independent eigenvectors (eigenspaces).


Geometric Transformations

These concrete examples illustrate how abstract linear transformations manifest geometrically. Each type has a characteristic matrix form that you should recognize.

Rotation and Reflection Transformations

  • Rotation by angle $\theta$ in $\mathbb{R}^2$ uses matrix $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$—preserves lengths and angles
  • Reflection over a line preserves distances but reverses orientation; determinant equals $-1$
  • Both are orthogonal transformations—their matrices satisfy $A^T A = I$, meaning $A^{-1} = A^T$
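
A sketch verifying the orthogonality and determinant facts for one rotation and one reflection (NumPy assumed; the angle and the reflection axis are arbitrary choices):

```python
import numpy as np

theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],   # rotation by theta
              [np.sin(theta),  np.cos(theta)]])
F = np.array([[1.0,  0.0],                       # reflection across the x-axis
              [0.0, -1.0]])

for M, name in [(R, "rotation"), (F, "reflection")]:
    print(name,
          np.allclose(M.T @ M, np.eye(2)),       # orthogonal: M^T M = I
          round(np.linalg.det(M)))               # det: 1 for rotation, -1 for reflection

x = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(R @ x), np.linalg.norm(x)))   # lengths preserved
```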

Scaling and Shear Transformations

  • Scaling matrices are diagonal—entries determine stretch factors along coordinate axes; $\begin{pmatrix} k_1 & 0 \\ 0 & k_2 \end{pmatrix}$ scales by $k_1$ horizontally and $k_2$ vertically
  • Shear matrices are triangular—they distort shapes by sliding layers; $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$ is a horizontal shear
  • Shears preserve area (determinant $= 1$) but change angles; scaling changes area by factor $|k_1 k_2|$
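
And the matching check for scaling and shear, again as a sketch with arbitrary example values of $k_1$, $k_2$, and $k$:

```python
import numpy as np

S = np.array([[3.0, 0.0],    # scaling: stretch by 3 horizontally, 0.5 vertically
              [0.0, 0.5]])
H = np.array([[1.0, 2.0],    # horizontal shear with k = 2
              [0.0, 1.0]])

print(np.linalg.det(S))   # 1.5 = |k1 * k2|: area scales by this factor
print(np.linalg.det(H))   # 1.0: shear preserves area

# Shear changes angles: the images of the standard basis vectors are no longer perpendicular.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(np.dot(H @ e1, H @ e2))   # 2.0, not 0
```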

Compare: Rotation vs. Shear—both have determinant $1$ and preserve area, but rotation preserves angles and lengths while shear distorts angles. To tell rotations from reflections, check the determinant: rotation has $\det = 1$, reflection has $\det = -1$.


Change of Basis

Different bases can dramatically simplify how a transformation looks. The goal is often to find a basis of eigenvectors, making the matrix diagonal.

Change of Basis for Linear Transformations

  • Same transformation, different matrix—if $P$ is the change of basis matrix, the new representation is $P^{-1}AP$
  • Similar matrices represent the same transformation in different bases; they share eigenvalues, determinant, and trace
  • Diagonalization occurs when you can find a basis of eigenvectors, yielding $P^{-1}AP = D$ where $D$ is diagonal
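
A diagonalization sketch, assuming $A$ has a full set of independent eigenvectors so that the eigenvector matrix $P$ returned by `numpy.linalg.eig` is invertible:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)   # columns of P form the eigenbasis
D = np.linalg.inv(P) @ A @ P        # P^{-1} A P: the same map written in the eigenbasis

print(np.round(D, 10))                          # diagonal, eigenvalues on the diagonal
print(np.allclose(np.diag(eigenvalues), D))     # True

# Similar matrices share eigenvalues, determinant, and trace.
print(np.isclose(np.linalg.det(A), np.linalg.det(D)),
      np.isclose(np.trace(A), np.trace(D)))     # True True
```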

Compare: Original basis vs. Eigenbasis—in the standard basis, a transformation might look complicated; in an eigenbasis (if one exists), it becomes diagonal and trivial to analyze. This is why eigenvalues matter so much!


Quick Reference Table

| Concept | Key Facts to Remember |
|---|---|
| Linearity Conditions | $T(\mathbf{x}+\mathbf{y}) = T(\mathbf{x})+T(\mathbf{y})$, $T(c\mathbf{x}) = cT(\mathbf{x})$, $T(\mathbf{0}) = \mathbf{0}$ |
| Kernel & Injectivity | $\ker(T) = \{\mathbf{0}\}$ iff $T$ is injective |
| Image & Surjectivity | $\text{Im}(T) = W$ iff $T$ is surjective |
| Invertibility | Requires bijection; $\det(A) \neq 0$ for square matrices |
| Composition | $(T_2 \circ T_1) \leftrightarrow BA$ (order reverses!) |
| Eigenvalue Equation | $A\mathbf{v} = \lambda\mathbf{v}$, solve $\det(A - \lambda I) = 0$ |
| Orthogonal Transformations | Rotations, reflections; $A^T A = I$ |
| Change of Basis | Similar matrices: $P^{-1}AP$, same eigenvalues |

Self-Check Questions

  1. If $T: \mathbb{R}^3 \to \mathbb{R}^3$ has a two-dimensional kernel, what can you conclude about its image dimension and whether $T$ is invertible?

  2. Compare and contrast rotation and reflection transformations in $\mathbb{R}^2$: what properties do they share, and how do their determinants differ?

  3. Given that $(AB)^{-1} = B^{-1}A^{-1}$, explain why the order reverses by thinking about composition of transformations.

  4. A transformation has eigenvalues $\lambda_1 = 2$ and $\lambda_2 = -1$. What happens to vectors along each eigendirection after applying the transformation twice?

  5. Why does changing the basis change the matrix representation but not the kernel dimension or eigenvalues? Connect this to the concept of similar matrices.