
Linear Algebra and Differential Equations

Matrix Transformations


Why This Matters

Matrix transformations are the bridge between abstract linear algebra and the physical world—they describe how vectors move, stretch, rotate, and project in space. When you're solving systems of differential equations, analyzing stability, or working with change-of-basis problems, you're fundamentally working with transformations. The exam will test whether you understand why a transformation behaves the way it does, not just whether you can multiply matrices.

You're being tested on your ability to recognize transformation types from their matrices, predict geometric effects, compose transformations correctly, and connect eigenvalue analysis to transformation behavior. Don't just memorize the matrix forms—know what each transformation does to the standard basis vectors, how transformations combine, and when a transformation is invertible. Master these concepts, and you'll handle everything from geometric problems to differential equation systems with confidence.


Transformations That Preserve Structure

These transformations maintain key geometric properties like distances, angles, or orientation. Understanding what each transformation preserves versus changes is crucial for exam questions.

Identity Transformation

  • Leaves all vectors unchanged—the "do nothing" transformation that maps every vector to itself
  • Represented by the identity matrix $I$, where $I\mathbf{v} = \mathbf{v}$ for all vectors $\mathbf{v}$
  • Serves as the multiplicative identity for matrix composition; essential baseline for understanding inverse transformations

Rotation

  • Rotates vectors around the origin by angle $\theta$ while preserving distances and angles between vectors
  • 2D rotation matrix: $R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$, an orthogonal matrix with determinant 1
  • Eigenvalues are complex ($e^{\pm i\theta}$), which explains why rotation has no real eigenvectors except at $\theta = 0$ or $\pi$ (see the numerical check after this list)
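
These three facts are easy to verify numerically. Here's a minimal NumPy sketch (NumPy and the angle $\pi/3$ are illustrative choices, not part of the course material):

```python
import numpy as np

theta = np.pi / 3  # any angle that isn't a multiple of pi works
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonal: R^T R = I, so lengths and angles are preserved
print(np.allclose(R.T @ R, np.eye(2)))      # True

# Determinant 1: orientation-preserving
print(np.isclose(np.linalg.det(R), 1.0))    # True

# Eigenvalues e^{+i*theta} and e^{-i*theta}: complex, so no real eigenvectors
print(np.linalg.eigvals(R))                 # [0.5+0.866j 0.5-0.866j]
```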

Reflection

  • Flips vectors across a line or plane, reversing orientation while preserving distances
  • Reflection matrix is orthogonal with determinant $-1$; this sign change indicates orientation reversal
  • Has eigenvalues $1$ and $-1$, corresponding to vectors parallel and perpendicular to the reflection axis (see the check after this list)
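
A similar check for reflection, using the standard 2D matrix for reflecting across a line at angle $\varphi$ to the x-axis (the formula and the angle $\pi/6$ are assumptions for illustration, not from the guide):

```python
import numpy as np

phi = np.pi / 6  # angle of the reflection axis (mirror line)
F = np.array([[np.cos(2*phi),  np.sin(2*phi)],
              [np.sin(2*phi), -np.cos(2*phi)]])

# Orthogonal with determinant -1: distances kept, orientation flipped
print(np.allclose(F.T @ F, np.eye(2)))      # True
print(np.isclose(np.linalg.det(F), -1.0))   # True

# F is symmetric, so eigvalsh applies; eigenvalues are -1 and 1
print(np.linalg.eigvalsh(F))                # [-1.  1.]
```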

Compare: Rotation vs. Reflection—both are orthogonal transformations (preserve lengths), but rotation has $\det = 1$ while reflection has $\det = -1$. If an FRQ asks about orientation-preserving transformations, rotation is your example; for orientation-reversing, use reflection.


Transformations That Change Size or Shape

These transformations alter geometric properties like length, area, or angles. Pay attention to how the determinant relates to area/volume scaling.

Scaling

  • Stretches or compresses vectors along coordinate axes by factors $s_x, s_y, \ldots$
  • Diagonal scaling matrix: $S = \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}$; its eigenvalues equal the scaling factors
  • Determinant equals $s_x \cdot s_y$, representing the area scale factor; uniform scaling ($s_x = s_y$) preserves angles (see the sketch after this list)
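
A quick NumPy illustration (the factors 3 and 2 are arbitrary) showing that the eigenvalues are the scaling factors and the determinant is the area scale factor:

```python
import numpy as np

S = np.diag([3.0, 2.0])   # stretch x by 3, y by 2

# Eigenvalues are just the diagonal scaling factors
print(np.linalg.eigvals(S))   # [3. 2.]

# det = s_x * s_y = 6: the unit square maps to a 3-by-2 rectangle of area 6
print(np.linalg.det(S))       # 6.0
```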

Shear

  • Distorts shape by sliding layers parallel to an axis while keeping one direction fixed
  • Shear matrix example: $\begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}$ shears horizontally by factor $k$; rectangles become slanted parallelograms
  • Determinant equals 1 (area preserved), but angles and distances change; eigenvalue is 1 with algebraic multiplicity 2 (demonstrated in the sketch after this list)
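
The sketch below (shear factor $k = 1.5$ chosen arbitrarily) makes the defectiveness concrete: NumPy reports the eigenvalue 1 twice, but both computed eigenvector columns point along the same direction, so there is only one independent eigendirection.

```python
import numpy as np

k = 1.5
H = np.array([[1.0, k],
              [0.0, 1.0]])   # horizontal shear

print(np.linalg.det(H))      # 1.0: area preserved

vals, vecs = np.linalg.eig(H)
print(vals)   # [1. 1.]: eigenvalue 1, algebraic multiplicity 2
print(vecs)   # both columns lie along the x-axis: one eigendirection,
              # so H is defective (not diagonalizable)
```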

Compare: Scaling vs. Shear—scaling changes area (unless $\det = 1$) and preserves axis directions, while shear preserves area but distorts angles. Both have real eigenvalues, but shear matrices are often defective (not diagonalizable).


Transformations That Reduce Dimension

Projections collapse space onto a subspace, making them essential for least-squares problems and understanding rank.

Projection

  • Maps vectors onto a subspace (line, plane, etc.) by finding the closest point in that subspace
  • Projection matrices satisfy $P^2 = P$ (idempotent)—applying the projection twice gives the same result
  • Eigenvalues are only 0 and 1; vectors in the target subspace have eigenvalue 1, vectors in the orthogonal complement have eigenvalue 0 (see the sketch after this list)
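
To see idempotence and the 0/1 eigenvalues in action, here's a sketch using the standard rank-one orthogonal projection $P = \mathbf{a}\mathbf{a}^T / (\mathbf{a}^T\mathbf{a})$ onto a line (the vector $\mathbf{a}$ is an arbitrary choice):

```python
import numpy as np

a = np.array([[3.0], [4.0]])   # column vector spanning the target line
P = (a @ a.T) / (a.T @ a)      # orthogonal projection onto span{a}

# Idempotent: projecting twice is the same as projecting once
print(np.allclose(P @ P, P))   # True

# Eigenvalues are exactly 0 and 1 (P is symmetric, so eigvalsh applies)
print(np.linalg.eigvalsh(P))   # [0. 1.]

# a itself is fixed (eigenvalue 1); a perpendicular vector is sent to 0
print(np.allclose(P @ a, a))   # True
perp = np.array([[-4.0], [3.0]])
print(np.allclose(P @ perp, 0))   # True
```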

Translation (in Homogeneous Coordinates)

  • Shifts all points by a fixed vector without rotation or scaling—not a linear transformation in standard coordinates
  • Requires homogeneous coordinates: $\begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}$ embeds translation into matrix multiplication (see the sketch after this list)
  • Essential for computer graphics where combining translation with other transformations requires a unified matrix framework
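
A minimal sketch of the homogeneous-coordinates trick (the shift $(3, -2)$ and the point $(5, 1)$ are arbitrary): append a 1 to the point, multiply, and read off the shifted coordinates.

```python
import numpy as np

tx, ty = 3.0, -2.0
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

p = np.array([5.0, 1.0, 1.0])   # the point (5, 1) in homogeneous coordinates
print(T @ p)                    # [ 8. -1.  1.]: the translated point (8, -1)
```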

Compare: Projection vs. Identity—both have eigenvalue 1, but projection also has eigenvalue 0 (for the nullspace). If a matrix satisfies $P^2 = P$ and $P \neq I$, it's a projection onto a proper subspace.


Combining and Analyzing Transformations

These concepts let you work with transformations as a system—composing them, reversing them, and understanding their fundamental behavior.

Composition of Transformations

  • Multiply matrices right-to-left: $T_2 T_1 \mathbf{v}$ applies $T_1$ first, then $T_2$; order matters!
  • Non-commutativity is critical: rotation then scaling ≠ scaling then rotation in general (see the sketch after this list)
  • Determinants multiply: $\det(AB) = \det(A)\det(B)$, so composed area scale factors combine multiplicatively
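
Non-commutativity is easy to demonstrate numerically; this sketch (a 90-degree rotation and a stretch along x, both arbitrary choices) sends the same vector to different places depending on the order, while the determinants multiply either way:

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotate 90 degrees
S = np.diag([2.0, 1.0])                           # stretch x by 2

v = np.array([1.0, 0.0])
print(S @ R @ v)   # rotate first, then scale: ~[0, 1]
print(R @ S @ v)   # scale first, then rotate: ~[0, 2]  (order matters!)

# Determinants multiply regardless of order: both products have det 2
print(np.linalg.det(S @ R), np.linalg.det(R @ S))   # ~2.0 ~2.0
```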

Inverse Transformations

  • Reverses the original transformation: $T^{-1}T = TT^{-1} = I$
  • Exists only when $\det(T) \neq 0$; singular matrices destroy information and can't be undone
  • For orthogonal matrices (rotations, reflections), $T^{-1} = T^T$, making inversion computationally simple (see the sketch after this list)
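
Two quick checks (the angle and the singular example matrix are arbitrary): for an orthogonal matrix the transpose really is the inverse, and a determinant of 0 means no inverse exists.

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonal matrix: the inverse is just the transpose
print(np.allclose(np.linalg.inv(R), R.T))   # True

# Singular matrix: dependent rows, det = 0, so it can't be undone
M = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row = 2 * first row
print(np.linalg.det(M))      # 0.0 (up to round-off)
```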

Eigenvalue Decomposition

  • Factors a diagonalizable matrix as $A = PDP^{-1}$, where $D$ contains eigenvalues and $P$ contains eigenvectors
  • Reveals transformation geometry: eigenvalues show scaling along eigenvector directions; complex eigenvalues indicate rotation
  • Powers become trivial: $A^n = PD^nP^{-1}$, essential for solving systems like $\mathbf{x}' = A\mathbf{x}$ in differential equations (see the sketch after this list)
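
Here are both ideas in one sketch (the matrix is an arbitrary diagonalizable example): reconstruct $A$ from $P$, $D$, $P^{-1}$, then compute $A^5$ by powering only the eigenvalues.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # eigenvalues 5 and 2

vals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(vals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}

# A^5 via the decomposition: only the eigenvalues get raised to the power
A5 = P @ np.diag(vals**5) @ np.linalg.inv(P)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))   # True
```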

Compare: Composition vs. Eigenvalue Decomposition—composition combines different transformations sequentially, while eigenvalue decomposition breaks one transformation into scaling along special directions. For repeated application of the same transformation, eigenvalue decomposition is far more efficient.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Distance-preserving (orthogonal) | Rotation, Reflection, Identity |
| Orientation-preserving | Rotation, Scaling (positive det), Identity |
| Orientation-reversing | Reflection, Scaling (negative det) |
| Area-preserving ($\det = 1$) | Rotation, Shear |
| Dimension-reducing | Projection |
| Requires homogeneous coordinates | Translation |
| Always diagonalizable | Scaling, Projection, Reflection |
| May be defective | Shear |

Self-Check Questions

  1. Which two transformations are orthogonal (preserve distances), and how can you distinguish them using the determinant?

  2. A matrix satisfies $P^2 = P$ but $P \neq I$. What type of transformation is this, and what are its possible eigenvalues?

  3. Compare and contrast scaling and shear: which preserves area, which preserves axis directions, and how do their eigenvalue structures differ?

  4. If you apply rotation by $\theta$ followed by rotation by $\phi$, what single transformation results? Does the order matter for rotations about the same axis?

  5. (FRQ-style) Given a transformation matrix with complex eigenvalues $\lambda = a \pm bi$ where $a^2 + b^2 = 1$, explain geometrically what this transformation does to vectors and why it has no real eigenvectors.