Abstract Linear Algebra II Unit 2 – Linear Transformations

Linear transformations are the backbone of abstract linear algebra, bridging vector spaces and preserving their structure. They map vectors between spaces while maintaining linearity, allowing us to analyze complex systems through simpler mathematical representations. Understanding linear transformations unlocks powerful tools like eigenvalues, diagonalization, and isomorphisms. These concepts have wide-ranging applications, from quantum mechanics to computer graphics, making them essential for both theoretical understanding and practical problem-solving in various fields.

Key Concepts and Definitions

  • Linear transformations map vectors from one vector space to another while preserving linear combinations and the zero vector
  • Domain refers to the vector space where the linear transformation starts, and codomain is the vector space where it ends up
  • Isomorphisms are bijective linear transformations; between finite-dimensional spaces, their matrix representation is a square, invertible matrix
  • Endomorphisms are linear transformations from a vector space to itself
  • Automorphisms are invertible endomorphisms, meaning they have an inverse transformation that "undoes" the original
  • Eigenvalues are scalars λ that satisfy the equation T(v) = λv for some nonzero vector v and linear transformation T (see the short check after this list)
    • The corresponding nonzero vectors v are called eigenvectors
  • Diagonalization expresses a linear transformation as a diagonal matrix, which simplifies computations and analysis
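To make the eigenvalue definition concrete, here is a minimal sketch (assuming NumPy is available; the 2×2 matrix and the eigenpair are made-up illustrative values) that checks T(v) = λv and that the map preserves the zero vector.

    import numpy as np

    # A concrete linear transformation T(v) = A v on R^2 (illustrative choice)
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # v = (1, 1) is an eigenvector of A with eigenvalue 3, since A v = (3, 3) = 3 v
    v = np.array([1.0, 1.0])
    lam = 3.0

    print(np.allclose(A @ v, lam * v))                 # True: T(v) = λ v
    print(np.allclose(A @ np.zeros(2), np.zeros(2)))   # True: the zero vector is preserved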

Vector Spaces and Linear Maps

  • Vector spaces are sets closed under vector addition and scalar multiplication that satisfy the vector space axioms, including containing a zero vector
  • Linear maps, or linear transformations, preserve the vector space structure when mapping between two vector spaces
    • They satisfy T(u + v) = T(u) + T(v) and T(cu) = cT(u) for all vectors u, v and scalars c (see the sketch after this list)
  • The kernel of a linear transformation T is the set of vectors that map to the zero vector, ker(T) = {v : T(v) = 0}
  • The image or range of T is the set of all vectors in the codomain that T maps to, Im(T) = {T(v) : v in the domain}
  • Rank-nullity theorem states that for a linear map T: V → W, dim(V) = dim(ker(T)) + dim(Im(T))
  • Isomorphic vector spaces have the same dimension and are structurally identical, even if they appear different
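The two linearity conditions above can be checked numerically; the following sketch (assuming NumPy; the matrix, vectors, and scalar are arbitrary illustrative values) verifies them for a matrix map from R^2 to R^3.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 2))        # T: R^2 -> R^3 given by T(v) = A v
    u = rng.standard_normal(2)
    v = rng.standard_normal(2)
    c = 2.5

    # T(u + v) = T(u) + T(v)
    print(np.allclose(A @ (u + v), A @ u + A @ v))   # True
    # T(cu) = c T(u)
    print(np.allclose(A @ (c * u), c * (A @ u)))     # True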

Properties of Linear Transformations

  • Linearity is the defining property: the map preserves vector addition and scalar multiplication
  • Injective (one-to-one) linear transformations have a trivial kernel, containing only the zero vector
  • Surjective (onto) linear transformations have an image equal to the entire codomain
  • Bijective linear transformations are both injective and surjective, and have an inverse transformation
  • Compositions of linear transformations are linear, i.e., if S and T are linear, then S ∘ T is linear (see the sketch after this list)
  • The identity transformation I(v) = v maps each vector to itself and is linear
  • The zero transformation maps every vector to the zero vector and is linear
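As noted in the composition bullet, composing linear maps gives another linear map, and in matrix form the composition is the matrix product; a short sketch (assuming NumPy, with hypothetical 2×2 matrices) illustrates this.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 1.0]])             # T: R^2 -> R^2 (a shear)
    B = np.array([[0.0, -1.0],
                  [1.0,  0.0]])            # S: R^2 -> R^2 (rotation by 90 degrees)
    v = np.array([3.0, 4.0])

    # (S ∘ T)(v): apply T then S, or equivalently multiply by the single matrix B A
    print(np.allclose(B @ (A @ v), (B @ A) @ v))     # True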

Matrix Representations

  • Every linear transformation can be represented by a matrix with respect to chosen bases for the domain and codomain
  • The matrix A of a linear transformation T satisfies T(v) = Av for all vectors v in the domain, once vectors are written in coordinates with respect to the chosen bases
  • For a transformation from a space to itself, changing the basis corresponds to a similarity transformation A′ = P⁻¹AP, where P is the change-of-basis matrix
  • Matrix multiplication represents the composition of linear transformations
  • The determinant of a square matrix determines whether the transformation is invertible (nonzero determinant) or not (zero determinant)
  • Eigenvalues and eigenvectors can be found using the characteristic equation det(A − λI) = 0 (see the sketch after this list)
  • Diagonalization is possible when there is a basis of eigenvectors, resulting in a diagonal matrix representation
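The determinant test and the characteristic equation from the bullets above can be checked directly; this is a sketch assuming NumPy, with a hypothetical 2×2 matrix.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Nonzero determinant => the transformation is invertible
    print(np.linalg.det(A))                          # ≈ 10.0, so A is invertible

    # Each eigenvalue λ makes A − λI singular, i.e. det(A − λI) = 0
    for lam in np.linalg.eigvals(A):                 # eigenvalues are 5 and 2
        print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))   # True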

Kernel and Image

  • The kernel is the set of all vectors that map to the zero vector under the transformation
    • It forms a subspace of the domain and its dimension is called the nullity
  • The image is the set of all vectors in the codomain that are outputs of the transformation
    • It forms a subspace of the codomain and its dimension is called the rank
  • Rank-nullity theorem relates the dimensions of the kernel, image, and domain: dim(V) = dim(ker(T)) + dim(Im(T)) (see the numerical check after this list)
  • Injectivity is equivalent to having a trivial kernel (only contains the zero vector)
  • Surjectivity is equivalent to the image being equal to the entire codomain
  • The first isomorphism theorem states that a linear transformation T: V → W induces an isomorphism between V/ker(T) and Im(T)
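A numerical check of the rank-nullity theorem and of the injectivity criterion above (a sketch assuming NumPy and SciPy; the wide matrix with dependent rows is an arbitrary illustrative choice).

    import numpy as np
    from scipy.linalg import null_space

    # T: R^3 -> R^2 with linearly dependent rows, so it is neither injective nor surjective
    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])

    rank = np.linalg.matrix_rank(A)            # dim(Im T)
    nullity = null_space(A).shape[1]           # dim(ker T); columns form a kernel basis

    print(rank, nullity)                       # 1 2
    print(rank + nullity == A.shape[1])        # True: rank-nullity with dim(V) = 3
    print(nullity == 0)                        # False: nontrivial kernel => not injective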

Eigenvalues and Eigenvectors

  • Eigenvectors are nonzero vectors that, when transformed, result in a scalar multiple of themselves
    • The scalar multiple is called the eigenvalue, satisfying T(v) = λv
  • Eigenspaces are the sets of all eigenvectors corresponding to a specific eigenvalue, together with the zero vector
  • The characteristic equation det(A − λI) = 0 is used to find eigenvalues, where A is the matrix of the transformation
  • Algebraic multiplicity of an eigenvalue is its multiplicity as a root of the characteristic equation
  • Geometric multiplicity of an eigenvalue is the dimension of its corresponding eigenspace
  • Diagonalizable matrices have a basis of eigenvectors and can be written as A = PDP⁻¹, where D is diagonal and the columns of P are eigenvectors (see the sketch after this list)
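A minimal diagonalization sketch (assuming NumPy; the symmetric 2×2 matrix is an illustrative choice) that builds P from eigenvectors, D from eigenvalues, and verifies A = PDP⁻¹.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    eigenvalues, P = np.linalg.eig(A)          # columns of P are eigenvectors of A
    D = np.diag(eigenvalues)

    # A has a basis of eigenvectors, so it is diagonalizable: A = P D P^{-1}
    print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True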

Applications and Examples

  • Markov chains use stochastic matrices to model transitions between states, with eigenvalues and eigenvectors describing the long-term behavior (see the sketch after this list)
  • Quantum mechanics represents states as vectors and observables as linear operators, with eigenvalues and eigenvectors corresponding to measurable quantities
  • Computer graphics use linear transformations for scaling, rotation, reflection, and shearing of images
    • Homogeneous coordinates allow for the representation of translations as matrix operations
  • Fourier transforms decompose functions into sums of simpler trigonometric functions, which can be viewed as a change of basis
  • Least squares fitting finds the best linear approximation to data by minimizing the sum of squared errors
  • Differential equations can be solved using eigenvalues and eigenvectors of the associated linear operator
  • Principal component analysis (PCA) uses eigenvectors of the covariance matrix to find the most informative directions in data
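To illustrate the Markov-chain bullet, the following sketch (assuming NumPy; the two-state, column-stochastic transition matrix is a made-up example) recovers the long-run distribution from the eigenvector for eigenvalue 1.

    import numpy as np

    # Column-stochastic transition matrix: column j holds the probabilities of leaving state j
    P = np.array([[0.9, 0.5],
                  [0.1, 0.5]])

    eigenvalues, eigenvectors = np.linalg.eig(P)
    k = np.argmin(np.abs(eigenvalues - 1.0))   # locate the eigenvalue equal to 1
    stationary = eigenvectors[:, k].real
    stationary /= stationary.sum()             # rescale into a probability vector

    print(stationary)                          # long-run state distribution ≈ [0.833, 0.167]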

Common Pitfalls and Tips

  • Ensure that transformations are well-defined and linear by checking the properties on the entire domain
  • Be careful when using matrix representations, as they depend on the choice of bases for the domain and codomain
  • Eigenvalues and eigenvectors are only defined for transformations from a space to itself, i.e., for square matrices where the domain and codomain coincide
  • Not all matrices are diagonalizable; they must have a full set of linearly independent eigenvectors
  • The zero vector is not an eigenvector, even though it satisfies the eigenvector equation for any eigenvalue
  • Algebraic and geometric multiplicities of eigenvalues can differ, with geometric multiplicity always less than or equal to algebraic multiplicity (a concrete example follows this list)
  • When using linear transformations in applications, be aware of any assumptions or limitations of the model
  • Practice visualizing linear transformations in 2D and 3D to build intuition, then generalize to higher dimensions
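As a concrete instance of the diagonalizability and multiplicity pitfalls above, here is a sketch (assuming NumPy and SciPy) using the 2×2 shear matrix, whose eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity 1, so the matrix is not diagonalizable.

    import numpy as np
    from scipy.linalg import null_space

    # Shear matrix: det(A − λI) = (1 − λ)^2, so λ = 1 is a double root
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

    print(np.linalg.eigvals(A))                # [1. 1.]  -> algebraic multiplicity 2

    # Geometric multiplicity = dimension of the eigenspace ker(A − 1·I)
    geometric = null_space(A - np.eye(2)).shape[1]
    print(geometric)                           # 1 < 2, so A is not diagonalizable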


