
Linear Algebra for Data Science

Linear Transformation Examples


Why This Matters

Linear transformations are the workhorses of data science—every time you rotate an image, reduce dimensions with PCA, or normalize features, you're applying a linear transformation. Understanding these operations means understanding how data moves, stretches, and projects through the mathematical spaces that power machine learning algorithms. You're being tested on your ability to recognize what each transformation does geometrically, how its matrix structure produces that effect, and when to apply it in practice.

Don't just memorize the matrix formulas. Know what concept each transformation illustrates: Does it preserve distances? Change orientation? Reduce dimensionality? The difference between a rotation and a reflection might seem subtle on paper, but understanding their geometric and algebraic properties will help you debug algorithms, interpret results, and ace exam questions that ask you to identify or construct the right transformation for a given task.


Transformations That Preserve Shape (Isometries)

These transformations maintain distances between points—objects keep their size and shape, just repositioned. Isometric transformations have orthogonal matrices, where $A^T A = I$.

Rotation Matrices

  • Rotate points around the origin by angle $\theta$—fundamental for image processing and coordinate system changes
  • Standard 2D form: $\begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}$—note the antisymmetric sine placement
  • Preserve distances and angles (isometric), with determinant = 1, meaning no scaling or reflection occurs
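A minimal NumPy sketch of these properties (the helper name `rotation_matrix` is illustrative, not a library function):

```python
import numpy as np

def rotation_matrix(theta):
    """2D rotation by angle theta (radians), counter-clockwise about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation_matrix(np.pi / 2)   # 90-degree rotation
v = np.array([1.0, 0.0])
rotated = R @ v                  # (1, 0) rotates to (0, 1)

# Isometry checks: R^T R = I and det(R) = +1 (no reflection)
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)
```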

Reflection Matrices

  • Flip points across a specified axis—reverses orientation while preserving all distances
  • X-axis reflection: $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$; Y-axis reflection instead puts the $-1$ in the top-left entry: $\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$
  • Determinant = -1 distinguishes reflections from rotations—both are isometric, but reflections reverse handedness
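A quick check of both properties in NumPy—length is preserved but the determinant is $-1$:

```python
import numpy as np

reflect_x = np.array([[1.0,  0.0],
                      [0.0, -1.0]])   # flips points across the x-axis

p = np.array([3.0, 4.0])
flipped = reflect_x @ p               # (3, 4) maps to (3, -4)

# Isometric (lengths preserved) but orientation-reversing (det = -1)
assert np.isclose(np.linalg.norm(flipped), np.linalg.norm(p))
assert np.isclose(np.linalg.det(reflect_x), -1.0)
```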

Permutation Matrices

  • Rearrange vector or matrix elements by swapping rows or columns—essential for sorting algorithms and LU decomposition
  • Structure: exactly one 1 per row and column, zeros elsewhere—multiplying reorders without scaling
  • Orthogonal matrices ($P^T = P^{-1}$) that preserve norms and are their own category of isometry
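For instance, a 3×3 permutation matrix that cycles coordinates (a sketch; the specific permutation chosen here is arbitrary):

```python
import numpy as np

# Permutation matrix that maps (x, y, z) -> (z, x, y)
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

v = np.array([10.0, 20.0, 30.0])
permuted = P @ v                     # reorders entries, no scaling

# Orthogonality: the transpose is the inverse, so norms are preserved
assert np.allclose(P.T @ P, np.eye(3))
assert np.isclose(np.linalg.norm(permuted), np.linalg.norm(v))
```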

Compare: Rotation vs. Reflection—both preserve distances and angles, but rotations have determinant +1 (preserve orientation) while reflections have determinant -1 (reverse orientation). If asked to classify an isometry, check the determinant first.


Transformations That Change Scale

These transformations stretch, compress, or independently rescale coordinates. The eigenvalues of these matrices directly correspond to the scaling factors along principal directions.

Scaling Transformations

  • Stretch or compress along coordinate axes—represented by $\begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix}$ where $s_x, s_y$ are scale factors
  • Non-uniform scaling occurs when $s_x \neq s_y$, distorting shapes while keeping axes aligned
  • Determinant = $s_x \cdot s_y$ gives the area scaling factor—critical for understanding volume changes in higher dimensions
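A small sketch of non-uniform scaling applied to a unit square, with the determinant giving the area factor:

```python
import numpy as np

S = np.diag([2.0, 3.0])          # stretch x by 2, y by 3 (non-uniform)

square = np.array([[0, 1, 1, 0],   # x-coordinates of unit-square corners
                   [0, 0, 1, 1]], dtype=float)   # y-coordinates
scaled = S @ square                # corners of a 2-by-3 rectangle

# Area scales by det(S) = s_x * s_y = 6
assert np.isclose(np.linalg.det(S), 6.0)
```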

Diagonal Matrices

  • Scale each coordinate independently—the general form $\begin{pmatrix} d_1 & 0 \\ 0 & d_2 \end{pmatrix}$ makes computation trivial
  • Eigenvalues are the diagonal entries themselves—diagonal matrices are already in their simplest spectral form
  • Powers and inverses are easy: just raise or invert each diagonal entry, making these ideal for iterative algorithms
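These computational shortcuts are easy to verify numerically—powers and inverses act entrywise on the diagonal:

```python
import numpy as np

D = np.diag([2.0, 5.0])

# Powers act entrywise: D^3 has 2^3 and 5^3 on the diagonal
D_cubed = np.linalg.matrix_power(D, 3)
assert np.allclose(D_cubed, np.diag([8.0, 125.0]))

# Inverses act entrywise too: just take reciprocals of the diagonal
D_inv = np.linalg.inv(D)
assert np.allclose(D_inv, np.diag([0.5, 0.2]))

# Eigenvalues are simply the diagonal entries
assert np.allclose(sorted(np.linalg.eigvals(D)), [2.0, 5.0])
```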

Compare: Scaling vs. Diagonal matrices—scaling transformations are diagonal matrices, but "diagonal matrix" is the broader algebraic category. When discussing geometry, say "scaling"; when discussing computation or eigendecomposition, say "diagonal."


Transformations That Distort Shape

These transformations change angles or collapse dimensions—shapes don't stay congruent. Shears preserve area but not angles; projections reduce rank and lose information.

Shear Transformations

  • Slide points parallel to an axis—horizontal shear: $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$ shifts x-coordinates by $k \times y$
  • Determinant = 1 means area is preserved despite the distortion—useful for understanding volume-preserving flows
  • Common in graphics and typography—creates italic effects and parallelogram distortions from rectangles
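A sketch of a horizontal shear acting on a unit square—angles change, but the determinant (and hence area) stays 1:

```python
import numpy as np

k = 1.5
H = np.array([[1.0, k],
              [0.0, 1.0]])   # horizontal shear: x -> x + k*y

square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)
sheared = H @ square          # top edge slides right; bottom edge stays put

# Area preserved despite the distortion
assert np.isclose(np.linalg.det(H), 1.0)
```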

Projection Matrices

  • Map points onto a lower-dimensional subspace—the foundation of PCA and least-squares regression
  • Idempotent property: $P^2 = P$—projecting twice gives the same result as projecting once
  • Symmetric projections ($P = P^T$) are orthogonal projections onto a subspace—key for finding the closest point in that subspace
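A minimal sketch: orthogonal projection onto the line spanned by a vector $a$, built with the standard formula $P = \frac{aa^T}{a^Ta}$:

```python
import numpy as np

# Orthogonal projection onto the line spanned by a
a = np.array([[1.0], [2.0]])
P = (a @ a.T) / (a.T @ a)

# Idempotent: projecting twice equals projecting once
assert np.allclose(P @ P, P)
# Symmetric: this is an orthogonal projection
assert np.allclose(P, P.T)
# Singular: rank drops to 1, so information is lost
assert np.linalg.matrix_rank(P) == 1
```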

Compare: Shear vs. Projection—shears are invertible (determinant ≠ 0) and preserve dimensionality, while projections are typically singular and reduce rank. If an FRQ asks about dimensionality reduction, projection is your answer; if it asks about invertible distortion, think shear.


Transformations That Extend Linear Algebra

These handle special cases: doing nothing, collapsing everything, or incorporating translation. Understanding these edge cases clarifies what "linear transformation" really means.

Identity Transformation

  • Maps every point to itself—represented by $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ in 2D
  • Multiplicative identity: $AI = IA = A$ for any conformable matrix—the "do nothing" operation
  • All eigenvalues equal 1—the baseline for comparing how other transformations stretch or compress

Zero Transformation

  • Collapses all points to the origin—represented by the zero matrix $\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$
  • Rank = 0 and nullity = full dimension—the entire input space maps to a single point
  • Illustrates the extreme case of information loss—useful for understanding null spaces and kernel concepts

Translation (Homogeneous Coordinates)

  • Shifts points by a constant vector—not linear in standard coordinates because $T(0) \neq 0$
  • Homogeneous form: $\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}$ embeds translation into matrix multiplication
  • Enables composition of rotation, scaling, and translation in a single matrix—standard in computer graphics pipelines
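A sketch of the homogeneous-coordinates trick (the helper name `homogeneous_translation` is illustrative): points gain a third coordinate fixed at 1, and translation becomes a matrix product that composes with rotation.

```python
import numpy as np

def homogeneous_translation(tx, ty):
    """3x3 matrix that shifts 2D points by (tx, ty) in homogeneous coordinates."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

T = homogeneous_translation(3.0, -1.0)
p = np.array([2.0, 5.0, 1.0])        # point (2, 5) with homogeneous coordinate 1
shifted = T @ p                      # (2, 5) shifts to (5, 4)

# Composition: translate-then-rotate collapses into one matrix
theta = np.pi                        # 180-degree rotation
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
combined = R @ T                     # applies T first, then R
```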

Compare: Identity vs. Zero transformation—both are trivial but opposite extremes. Identity preserves everything (full rank, all eigenvalues = 1); zero destroys everything (rank 0, all eigenvalues = 0). These bookend the spectrum of linear transformations.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Distance-preserving (isometric) | Rotation, Reflection, Permutation |
| Scaling/stretching | Scaling transformation, Diagonal matrix |
| Shape distortion | Shear transformation |
| Dimensionality reduction | Projection matrix |
| Determinant = 1 (area-preserving) | Rotation, Shear |
| Determinant = -1 (orientation-reversing) | Reflection |
| Idempotent ($P^2 = P$) | Projection matrix, Identity |
| Non-linear made linear | Translation (via homogeneous coordinates) |

Self-Check Questions

  1. Which two transformations are both isometric but differ in their determinant sign, and what does that sign indicate geometrically?

  2. You apply a transformation twice and get the same result as applying it once. Which transformations have this property, and what is it called?

  3. Compare and contrast scaling transformations and shear transformations: which preserves angles? Which preserves area? Which is always invertible?

  4. If you need to combine rotation, scaling, and translation into a single matrix multiplication, what technique must you use and why?

  5. A transformation has eigenvalues all equal to 1 but is not the identity matrix. Which transformation from this guide fits that description, and what does it do geometrically?