Matrix transformations describe how vectors move, stretch, rotate, and project in space. They connect abstract linear algebra to physical and geometric intuition, and they show up constantly when you're solving systems of differential equations, analyzing stability, or working with change-of-basis problems.
To do well on exams, you need to recognize transformation types from their matrices, predict geometric effects, compose transformations correctly, and connect eigenvalue analysis to transformation behavior. Don't just memorize the matrix forms. Know what each transformation does to the standard basis vectors, how transformations combine, and when a transformation is invertible.
Transformations That Preserve Structure
These transformations maintain key geometric properties like distances, angles, or orientation. Understanding what each one preserves versus changes is the key to telling them apart.
Identity Transformation
Leaves all vectors unchanged. It maps every vector to itself: Iv = v for all v.
The identity matrix I is the multiplicative identity for matrix composition, meaning AI=IA=A for any matrix A
It serves as the baseline for understanding inverse transformations: if T⁻¹T = I, you've "undone" T.
Rotation
Rotates vectors around the origin by angle θ, preserving both distances and angles between vectors
The 2D rotation matrix is:
R(θ) = [ cos θ   −sin θ ]
       [ sin θ    cos θ ]
This matrix is orthogonal with determinant 1.
Eigenvalues are complex: e^(±iθ). This is why rotation has no real eigenvectors in general. No nonzero vector points in the same direction after being rotated (unless θ = 0 or θ = π).
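To make these three facts concrete, here's a minimal NumPy sketch (NumPy is assumed purely for illustration) that builds a 90° rotation and checks orthogonality, the determinant, and the complex eigenvalues:

```python
import numpy as np

# 2D rotation by theta = 90 degrees (pi/2 radians)
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonal (R^T R = I) with determinant 1
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)

# Eigenvalues are e^(+i*theta) and e^(-i*theta): complex, not real
eigvals = np.linalg.eigvals(R)
assert np.allclose(sorted(eigvals, key=lambda z: z.imag),
                   [np.exp(-1j * theta), np.exp(1j * theta)])

# e1 = (1, 0) rotates to (0, 1): no nonzero vector keeps its direction
assert np.allclose(R @ np.array([1.0, 0.0]), [0.0, 1.0])
```

The eigenvalue check is exactly the exam fact above: for θ ≠ 0, π the eigenvalues have nonzero imaginary part, so no real eigenvector exists.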
Reflection
Flips vectors across a line (2D) or plane (3D), reversing orientation while preserving distances
The reflection matrix is orthogonal with determinant −1. That negative sign is what tells you orientation has been reversed.
Eigenvalues are 1 and −1. Vectors along the reflection axis are unchanged (eigenvalue 1), while vectors perpendicular to it get flipped (eigenvalue −1).
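A quick NumPy sketch (illustrative only) using reflection across the x-axis shows both the determinant and the eigenvalue behavior:

```python
import numpy as np

# Reflection across the x-axis
F = np.array([[1.0,  0.0],
              [0.0, -1.0]])

# Orthogonal, but with determinant -1: orientation is reversed
assert np.allclose(F.T @ F, np.eye(2))
assert np.isclose(np.linalg.det(F), -1.0)

# Eigenvalues are 1 and -1
assert np.allclose(sorted(np.linalg.eigvals(F)), [-1.0, 1.0])

# Vector on the reflection axis is fixed; perpendicular vector is flipped
assert np.allclose(F @ np.array([1.0, 0.0]), [1.0, 0.0])
assert np.allclose(F @ np.array([0.0, 1.0]), [0.0, -1.0])
```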
Compare: Rotation vs. Reflection: both are orthogonal transformations (preserve lengths), but rotation has det = 1 while reflection has det = −1. Rotation preserves orientation; reflection reverses it.
Transformations That Change Size or Shape
These transformations alter geometric properties like length, area, or angles. The determinant tells you exactly how area (2D) or volume (3D) scales.
Scaling
Stretches or compresses vectors along coordinate axes by factors s_x, s_y, …
The diagonal scaling matrix looks like:
S = [ s_x   0  ]
    [  0   s_y ]
The eigenvalues are the scaling factors, and the eigenvectors point along the coordinate axes.
Determinant equals s_x · s_y, which gives the area scale factor. When s_x = s_y (uniform scaling), angles are preserved and every direction scales equally.
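As a quick NumPy check (the factors 2 and 3 are arbitrary illustration values):

```python
import numpy as np

s_x, s_y = 2.0, 3.0
S = np.diag([s_x, s_y])  # diagonal scaling matrix

# Eigenvalues are the scaling factors themselves
assert np.allclose(sorted(np.linalg.eigvals(S)), [s_x, s_y])

# Determinant = s_x * s_y is the area scale factor (here, 6x)
assert np.isclose(np.linalg.det(S), s_x * s_y)

# Eigenvectors point along the coordinate axes: e1 just stretches
assert np.allclose(S @ np.array([1.0, 0.0]), [s_x, 0.0])
```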
Shear
Distorts shape by sliding layers parallel to an axis while keeping one direction fixed
A horizontal shear by factor k:
[ 1  k ]
[ 0  1 ]
This turns rectangles into parallelograms. The x-component of a vector gets shifted by k times its y-component, while the y-component stays the same.
Determinant equals 1 (area is preserved), but angles and distances change. The eigenvalue is 1 with algebraic multiplicity 2, yet there's only one linearly independent eigenvector. This makes shear matrices defective (not diagonalizable).
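The defectiveness claim is easy to verify numerically: if the eigenvalue 1 had two independent eigenvectors, H − I would be the zero matrix, but for a shear it has rank 1. A NumPy sketch (k = 1.5 is an arbitrary example value):

```python
import numpy as np

k = 1.5
H = np.array([[1.0, k],
              [0.0, 1.0]])  # horizontal shear by factor k

# Area is preserved: det = 1
assert np.isclose(np.linalg.det(H), 1.0)

# Corner (1, 1) of the unit square slides to (1 + k, 1):
# rectangles become parallelograms
assert np.allclose(H @ np.array([1.0, 1.0]), [1.0 + k, 1.0])

# Eigenvalue 1 with algebraic multiplicity 2, but H - I has rank 1,
# so there is only ONE independent eigenvector: H is defective
assert np.allclose(np.linalg.eigvals(H), [1.0, 1.0])
assert np.linalg.matrix_rank(H - np.eye(2)) == 1
```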
Compare: Scaling vs. Shear: scaling changes area (unless det=1) and preserves axis directions, while shear preserves area but distorts angles. Both have real eigenvalues, but shear matrices are often defective.
Transformations That Reduce Dimension
Projections collapse space onto a subspace. They're essential for least-squares problems and for understanding rank.
Projection
Maps vectors onto a subspace (line, plane, etc.) by finding the closest point in that subspace
Projection matrices satisfy P² = P (this property is called idempotence). Applying the projection twice gives the same result as applying it once, because the vector is already in the subspace after the first application.
Eigenvalues are only 0 and 1. Vectors already in the target subspace have eigenvalue 1 (they don't move). Vectors in the orthogonal complement have eigenvalue 0 (they get sent to the zero vector).
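For an orthogonal projection onto a line spanned by a unit vector u, the matrix is P = uuᵀ. A NumPy sketch (the line y = x is an arbitrary choice for illustration) confirms idempotence and the 0/1 eigenvalues:

```python
import numpy as np

# Orthogonal projection onto the line spanned by u = (1, 1)/sqrt(2)
u = np.array([1.0, 1.0]) / np.sqrt(2)
P = np.outer(u, u)

# Idempotent: projecting twice is the same as projecting once
assert np.allclose(P @ P, P)

# Eigenvalues are only 0 and 1
assert np.allclose(sorted(np.linalg.eigvals(P)), [0.0, 1.0])

# A vector already on the line doesn't move (eigenvalue 1);
# a perpendicular vector is sent to zero (eigenvalue 0)
assert np.allclose(P @ u, u)
v_perp = np.array([1.0, -1.0]) / np.sqrt(2)
assert np.allclose(P @ v_perp, [0.0, 0.0])
```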
Translation (in Homogeneous Coordinates)
Shifts all points by a fixed vector without rotation or scaling. Translation is not a linear transformation in standard coordinates because it doesn't map the origin to itself.
To represent translation as matrix multiplication, you use homogeneous coordinates, which add an extra coordinate (always set to 1) to each vector:
[ 1  0  t_x ]
[ 0  1  t_y ]
[ 0  0   1  ]
This shifts every point by (t_x, t_y).
Homogeneous coordinates let you combine translation with rotation, scaling, and other transformations in a single matrix framework. This is standard practice in computer graphics.
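A NumPy sketch (the shift (3, −2) and point (5, 7) are arbitrary illustration values) shows translation as matrix multiplication, and why it fails to be linear in ordinary 2D coordinates:

```python
import numpy as np

t_x, t_y = 3.0, -2.0
T = np.array([[1.0, 0.0, t_x],
              [0.0, 1.0, t_y],
              [0.0, 0.0, 1.0]])

# Point (5, 7) in homogeneous coordinates: append the extra 1
p = np.array([5.0, 7.0, 1.0])
assert np.allclose(T @ p, [5.0 + t_x, 7.0 + t_y, 1.0])  # shifted by (t_x, t_y)

# Translation does NOT fix the origin -- that's exactly why it isn't a
# linear transformation in standard 2D coordinates
origin = np.array([0.0, 0.0, 1.0])
assert not np.allclose(T @ origin, origin)
```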
Compare: Projection vs. Identity: both have eigenvalue 1, but projection also has eigenvalue 0 (for vectors in the nullspace). If a matrix satisfies P² = P and P ≠ I, it's a projection onto a proper subspace.
Combining and Analyzing Transformations
These concepts let you work with transformations as a system: composing them, reversing them, and understanding their fundamental behavior.
Composition of Transformations
Multiply matrices right-to-left: T₂T₁v applies T₁ first, then T₂. Order matters.
Non-commutativity is critical. In general, rotation then scaling ≠ scaling then rotation. Always check which transformation acts first.
Determinants multiply under composition: det(AB) = det(A)det(B). So area scale factors combine multiplicatively when you compose transformations.
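Both points can be checked in a few lines of NumPy (a 45° rotation and a non-uniform scaling are arbitrary illustration choices):

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation
S = np.diag([2.0, 1.0])                           # non-uniform scaling

# S @ R means "rotate first, then scale"; R @ S is the reverse order.
# They are different matrices: composition does not commute.
assert not np.allclose(S @ R, R @ S)

# But determinants multiply regardless of order: det(SR) = det(S)det(R)
assert np.isclose(np.linalg.det(S @ R),
                  np.linalg.det(S) * np.linalg.det(R))
```

Note that uniform scaling (s_x = s_y) would commute with rotation; non-uniform scaling is what breaks commutativity here.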
Inverse Transformations
Reverses the original transformation: T⁻¹T = TT⁻¹ = I
An inverse exists only when det(T) ≠ 0. Singular matrices (det = 0) collapse some dimension of the space, destroying information that can't be recovered.
For orthogonal matrices (rotations, reflections), the inverse is just the transpose: T⁻¹ = Tᵀ. This makes inversion computationally cheap.
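A NumPy sketch contrasting the two cases (the rotation angle 0.7 is arbitrary; the singular example is projection onto the x-axis):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonal matrix: the transpose IS the inverse -- no solve needed
assert np.allclose(R.T, np.linalg.inv(R))
assert np.allclose(R.T @ R, np.eye(2))

# Singular matrix (det = 0): projection onto the x-axis collapses
# the y-dimension, so the lost information can't be recovered
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
assert np.isclose(np.linalg.det(P), 0.0)
# np.linalg.inv(P) would raise LinAlgError: no inverse exists
```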
Eigenvalue Decomposition
Factors a diagonalizable matrix as A = PDP⁻¹, where D is a diagonal matrix of eigenvalues and P has the corresponding eigenvectors as columns
This decomposition reveals the geometry of the transformation: eigenvalues show how much the transformation scales along each eigenvector direction, and complex eigenvalues indicate rotation.
Powers become trivial: Aⁿ = PDⁿP⁻¹. You just raise each diagonal entry to the nth power. This is essential for solving systems like x′ = Ax in differential equations, where the solution involves e^(At).
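A NumPy sketch (the matrix A below is an arbitrary diagonalizable example with eigenvalues 5 and 2) verifies the factorization and the power shortcut:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Diagonalize: columns of P are eigenvectors, D holds the eigenvalues
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
assert np.allclose(P @ D @ np.linalg.inv(P), A)  # A = P D P^{-1}

# Powers via the decomposition: A^10 = P D^10 P^{-1},
# and D^10 just raises each diagonal entry to the 10th power
A10_direct = np.linalg.matrix_power(A, 10)
A10_eig = P @ np.diag(eigvals**10) @ np.linalg.inv(P)
assert np.allclose(A10_direct, A10_eig)
```

The same idea gives e^(At) = P e^(Dt) P⁻¹, where e^(Dt) is diagonal with entries e^(λt); that's what makes this decomposition so useful for x′ = Ax.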
Compare: Composition vs. Eigenvalue Decomposition: composition combines different transformations sequentially, while eigenvalue decomposition breaks one transformation into scaling along special directions. For repeated application of the same transformation, eigenvalue decomposition is far more efficient.
Quick Reference Table
Concept
Best Examples
Distance-preserving (orthogonal)
Rotation, Reflection, Identity
Orientation-preserving
Rotation, Scaling (positive det), Identity
Orientation-reversing
Reflection, Scaling (negative det)
Area-preserving (det=1)
Rotation, Shear
Dimension-reducing
Projection
Requires homogeneous coordinates
Translation
Always diagonalizable
Scaling, Projection, Reflection
May be defective
Shear
Self-Check Questions
Which two transformations are orthogonal (preserve distances), and how can you distinguish them using the determinant?
A matrix satisfies P² = P but P ≠ I. What type of transformation is this, and what are its possible eigenvalues?
Compare scaling and shear: which preserves area, which preserves axis directions, and how do their eigenvalue structures differ?
If you apply rotation by θ followed by rotation by φ, what single transformation results? Does the order matter for rotations about the same axis?
Given a transformation matrix with complex eigenvalues λ = a ± bi where a² + b² = 1, explain geometrically what this transformation does to vectors and why it has no real eigenvectors.