Linear Algebra and Differential Equations

Matrix Transformations

Why This Matters

Matrix transformations describe how vectors move, stretch, rotate, and project in space. They connect abstract linear algebra to physical and geometric intuition, and they show up constantly when you're solving systems of differential equations, analyzing stability, or working with change-of-basis problems.

To do well on exams, you need to recognize transformation types from their matrices, predict geometric effects, compose transformations correctly, and connect eigenvalue analysis to transformation behavior. Don't just memorize the matrix forms. Know what each transformation does to the standard basis vectors, how transformations combine, and when a transformation is invertible.


Transformations That Preserve Structure

These transformations maintain key geometric properties like distances, angles, or orientation. Understanding what each one preserves versus changes is the key to telling them apart.

Identity Transformation

  • Leaves all vectors unchanged. It maps every vector to itself: $I\mathbf{v} = \mathbf{v}$ for all $\mathbf{v}$
  • The identity matrix $I$ is the multiplicative identity for matrix composition, meaning $AI = IA = A$ for any matrix $A$
  • It serves as the baseline for understanding inverse transformations: if $T^{-1}T = I$, you've "undone" $T$

Rotation

  • Rotates vectors around the origin by angle $\theta$, preserving both distances and angles between vectors
  • The 2D rotation matrix is:

$$R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

This matrix is orthogonal with determinant 1.

  • Eigenvalues are complex: $e^{\pm i\theta}$. This is why rotation has no real eigenvectors in general. No nonzero vector points in the same direction after being rotated (unless $\theta = 0$ or $\theta = \pi$).
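These rotation facts can be checked numerically. A minimal NumPy sketch (the `rotation` helper and the choice of angle are illustrative, not from the text):

```python
import numpy as np

def rotation(theta):
    """2D rotation matrix R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

R = rotation(np.pi / 3)  # rotate by 60 degrees

# Orthogonal (R^T R = I) with determinant 1
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)

# Eigenvalues are e^{+i*theta} and e^{-i*theta}: complex, so no real eigenvectors
expected = [np.exp(1j * np.pi / 3), np.exp(-1j * np.pi / 3)]
assert np.allclose(np.sort_complex(np.linalg.eigvals(R)), np.sort_complex(expected))
```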

Reflection

  • Flips vectors across a line (2D) or plane (3D), reversing orientation while preserving distances
  • The reflection matrix is orthogonal with determinant $-1$. That negative sign is what tells you orientation has been reversed.
  • Eigenvalues are $1$ and $-1$. Vectors along the reflection axis are unchanged (eigenvalue $1$), while vectors perpendicular to it get flipped (eigenvalue $-1$).

Compare: Rotation vs. Reflection: both are orthogonal transformations (preserve lengths), but rotation has $\det = 1$ while reflection has $\det = -1$. Rotation preserves orientation; reflection reverses it.
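The determinant test for telling the two apart is easy to verify. This sketch uses the standard formula for reflection across the line at angle $\theta$ through the origin; that formula is an assumption here, not something stated above:

```python
import numpy as np

def reflection(theta):
    """Reflection across the line through the origin at angle theta (assumed formula)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

F = reflection(np.pi / 4)  # reflect across the line y = x

assert np.allclose(F.T @ F, np.eye(2))     # orthogonal: distances preserved
assert np.isclose(np.linalg.det(F), -1.0)  # det = -1: orientation reversed

# Eigenvalues 1 (along the axis) and -1 (perpendicular to it)
assert np.allclose(np.sort(np.linalg.eigvals(F).real), [-1.0, 1.0])
```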


Transformations That Change Size or Shape

These transformations alter geometric properties like length, area, or angles. The determinant tells you exactly how area (2D) or volume (3D) scales.

Scaling

  • Stretches or compresses vectors along coordinate axes by factors $s_x, s_y, \ldots$
  • The diagonal scaling matrix looks like:

$$S = \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}$$

The eigenvalues are the scaling factors, and the eigenvectors point along the coordinate axes.

  • Determinant equals $s_x \cdot s_y$, which gives the area scale factor. When $s_x = s_y$ (uniform scaling), angles are preserved and every direction scales equally.
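A quick numerical check of both bullet points (the factors 3 and 0.5 are arbitrary example values):

```python
import numpy as np

S = np.diag([3.0, 0.5])  # stretch x by 3, compress y by 1/2

# Eigenvalues are the scaling factors themselves
assert np.allclose(np.sort(np.linalg.eigvals(S)), [0.5, 3.0])

# Determinant s_x * s_y gives the area scale factor
assert np.isclose(np.linalg.det(S), 1.5)
```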

Shear

  • Distorts shape by sliding layers parallel to an axis while keeping one direction fixed
  • A horizontal shear by factor $k$:

$$\begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}$$

This turns rectangles into parallelograms. The $x$-component of a vector gets shifted by $k$ times its $y$-component, while the $y$-component stays the same.

  • Determinant equals 1 (area is preserved), but angles and distances change. The eigenvalue is 1 with algebraic multiplicity 2, yet there's only one linearly independent eigenvector. This makes shear matrices defective (not diagonalizable).

Compare: Scaling vs. Shear: scaling changes area (unless $\det = 1$) and preserves axis directions, while shear preserves area but distorts angles. Both have real eigenvalues, but shear matrices are often defective.
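The defectiveness of a shear matrix can be seen by counting eigenvectors: the eigenspace for $\lambda = 1$ is the nullspace of $H - I$, which is only one-dimensional. A sketch (the value of $k$ is arbitrary):

```python
import numpy as np

k = 2.0
H = np.array([[1.0, k],
              [0.0, 1.0]])  # horizontal shear by k

assert np.isclose(np.linalg.det(H), 1.0)  # area preserved

# Eigenvalue 1 with algebraic multiplicity 2...
assert np.allclose(np.linalg.eigvals(H), [1.0, 1.0])

# ...but H - I has rank 1, so the eigenspace is only one-dimensional:
# a single independent eigenvector means H is defective (not diagonalizable)
assert np.linalg.matrix_rank(H - np.eye(2)) == 1
```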


Transformations That Reduce Dimension

Projections collapse space onto a subspace. They're essential for least-squares problems and for understanding rank.

Projection

  • Maps vectors onto a subspace (line, plane, etc.) by finding the closest point in that subspace
  • Projection matrices satisfy $P^2 = P$ (this property is called idempotence). Applying the projection twice gives the same result as applying it once, because the vector is already in the subspace after the first application.
  • Eigenvalues are only 0 and 1. Vectors already in the target subspace have eigenvalue 1 (they don't move). Vectors in the orthogonal complement have eigenvalue 0 (they get sent to the zero vector).
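A sketch of projection onto a line, using the rank-one formula $P = \mathbf{u}\mathbf{u}^T / (\mathbf{u}^T\mathbf{u})$ for orthogonal projection onto $\operatorname{span}\{\mathbf{u}\}$ (that formula is an assumption, not stated above):

```python
import numpy as np

u = np.array([[1.0], [2.0]])   # direction of the target line
P = (u @ u.T) / (u.T @ u)      # orthogonal projection onto span{u} (assumed formula)

assert np.allclose(P @ P, P)   # idempotent: projecting twice = projecting once
assert np.allclose(np.sort(np.linalg.eigvals(P)), [0.0, 1.0])

assert np.allclose(P @ u, u)                  # vectors in the subspace don't move
w = np.array([[2.0], [-1.0]])                 # perpendicular to u
assert np.allclose(P @ w, np.zeros((2, 1)))   # the orthogonal complement maps to zero
```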

Translation (in Homogeneous Coordinates)

  • Shifts all points by a fixed vector without rotation or scaling. Translation is not a linear transformation in standard coordinates because it doesn't map the origin to itself.
  • To represent translation as matrix multiplication, you use homogeneous coordinates, which add an extra coordinate (always set to 1) to each vector:

$$\begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

This shifts every point by $(t_x, t_y)$.

  • Homogeneous coordinates let you combine translation with rotation, scaling, and other transformations in a single matrix framework. This is standard practice in computer graphics.
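Here is the homogeneous-coordinate trick in action (the shift $(4, -1)$ and the point $(2, 3)$ are arbitrary example values):

```python
import numpy as np

tx, ty = 4.0, -1.0
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

p = np.array([2.0, 3.0, 1.0])               # the point (2, 3), extra coordinate set to 1
assert np.allclose(T @ p, [6.0, 2.0, 1.0])  # shifted by (4, -1)
```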

Compare: Projection vs. Identity: both have eigenvalue 1, but projection also has eigenvalue 0 (for vectors in the nullspace). If a matrix satisfies $P^2 = P$ and $P \neq I$, it's a projection onto a proper subspace.


Combining and Analyzing Transformations

These concepts let you work with transformations as a system: composing them, reversing them, and understanding their fundamental behavior.

Composition of Transformations

  • Multiply matrices right-to-left: $T_2 T_1 \mathbf{v}$ applies $T_1$ first, then $T_2$. Order matters.
  • Non-commutativity is critical. In general, rotation then scaling $\neq$ scaling then rotation. Always check which transformation acts first.
  • Determinants multiply under composition: $\det(AB) = \det(A)\det(B)$. So area scale factors combine multiplicatively when you compose transformations.
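Both points can be demonstrated with a rotation and a non-uniform scaling (the specific angle and factors are arbitrary example choices):

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotate 90 degrees
S = np.diag([2.0, 1.0])                          # stretch x by 2

# Order matters: rotate-then-scale differs from scale-then-rotate
assert not np.allclose(S @ R, R @ S)

# Determinants multiply regardless of order
assert np.isclose(np.linalg.det(R @ S),
                  np.linalg.det(R) * np.linalg.det(S))
```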

Inverse Transformations

  • Reverses the original transformation: $T^{-1}T = TT^{-1} = I$
  • An inverse exists only when $\det(T) \neq 0$. Singular matrices ($\det = 0$) collapse some dimension of the space, destroying information that can't be recovered.
  • For orthogonal matrices (rotations, reflections), the inverse is just the transpose: $T^{-1} = T^T$. This makes inversion computationally cheap.
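The transpose-as-inverse shortcut is easy to confirm on a rotation matrix (the angle is an arbitrary example value):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# R is orthogonal, so its transpose equals its inverse: no solve required
assert np.allclose(R.T, np.linalg.inv(R))
assert np.allclose(R.T @ R, np.eye(2))
```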

Eigenvalue Decomposition

  • Factors a diagonalizable matrix as $A = PDP^{-1}$, where $D$ is a diagonal matrix of eigenvalues and $P$ has the corresponding eigenvectors as columns
  • This decomposition reveals the geometry of the transformation: eigenvalues show how much the transformation scales along each eigenvector direction, and complex eigenvalues indicate rotation.
  • Powers become trivial: $A^n = PD^nP^{-1}$. You just raise each diagonal entry to the $n$th power. This is essential for solving systems like $\mathbf{x}' = A\mathbf{x}$ in differential equations, where the solution involves $e^{At}$.
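A sketch of the decomposition and the power shortcut (the example matrix, with eigenvalues 5 and 2, is an arbitrary diagonalizable choice):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # diagonalizable; eigenvalues 5 and 2

vals, P = np.linalg.eig(A)          # columns of P are eigenvectors
D = np.diag(vals)
assert np.allclose(P @ D @ np.linalg.inv(P), A)   # A = P D P^{-1}

# Powers come almost for free: A^5 = P D^5 P^{-1}
A5 = P @ np.diag(vals**5) @ np.linalg.inv(P)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```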

Compare: Composition vs. Eigenvalue Decomposition: composition combines different transformations sequentially, while eigenvalue decomposition breaks one transformation into scaling along special directions. For repeated application of the same transformation, eigenvalue decomposition is far more efficient.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Distance-preserving (orthogonal) | Rotation, Reflection, Identity |
| Orientation-preserving | Rotation, Scaling (positive det), Identity |
| Orientation-reversing | Reflection, Scaling (negative det) |
| Area-preserving ($\det = 1$) | Rotation, Shear |
| Dimension-reducing | Projection |
| Requires homogeneous coordinates | Translation |
| Always diagonalizable | Scaling, Projection, Reflection |
| May be defective | Shear |

Self-Check Questions

  1. Which two transformations are orthogonal (preserve distances), and how can you distinguish them using the determinant?

  2. A matrix satisfies $P^2 = P$ but $P \neq I$. What type of transformation is this, and what are its possible eigenvalues?

  3. Compare scaling and shear: which preserves area, which preserves axis directions, and how do their eigenvalue structures differ?

  4. If you apply rotation by $\theta$ followed by rotation by $\phi$, what single transformation results? Does the order matter for rotations about the same axis?

  5. Given a transformation matrix with complex eigenvalues $\lambda = a \pm bi$ where $a^2 + b^2 = 1$, explain geometrically what this transformation does to vectors and why it has no real eigenvectors.