Vectors and vector spaces form the backbone of mathematical economics, providing tools to represent and analyze complex economic relationships. These concepts allow economists to model multidimensional data, optimize decisions, and study interactions between economic variables.
From price vectors to production possibility frontiers, vector operations enable sophisticated economic analysis. This section covers vector definitions and operations, the structure of vector spaces, and how these tools connect to real economic problems.
Definition of vectors
A vector is a mathematical object that holds multiple numerical values in a specific order. In economics, vectors let you represent several variables at once. A price vector, for example, can capture the prices of every good in an economy in a single object rather than tracking each price separately.
Components of vectors
A vector's components are the ordered list of numerical values that define it. Each component corresponds to a dimension or variable. An n-dimensional vector is written as $\mathbf{v} = (v_1, v_2, \ldots, v_n)$.
For example, if you're tracking the prices of three goods (bread, milk, eggs), your price vector might be $\mathbf{p} = (2.50, 1.20, 3.00)$. Each component maps to one good, and the ordering matters.
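A minimal sketch of this idea in NumPy (the specific prices are made-up illustrative values):

```python
import numpy as np

# A price vector for three goods, in fixed order: (bread, milk, eggs).
# The prices themselves are made-up illustrative values.
p = np.array([2.50, 1.20, 3.00])

print(p[0])      # price of bread, the first component
print(p.shape)   # (3,) — a 3-dimensional vector
```

Indexing by position is what "the ordering matters" means in practice: `p[0]` is always bread only because the convention fixes that order.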
Geometric representation
Geometrically, a vector is an arrow in space with a starting point (tail) and an endpoint (head). The arrow's length indicates the vector's magnitude, and its direction shows the vector's orientation.
This visualization helps when thinking about economic relationships. Two vectors pointing in similar directions suggest positively correlated variables, while perpendicular vectors suggest independence.
Vector notation
Vectors are typically written using bold lowercase letters ($\mathbf{v}$) or letters with arrows ($\vec{v}$). They can appear in two formats:
- Column notation: components stacked vertically
- Row notation: components listed horizontally
Column notation is the default in most linear algebra and economics texts. Row vectors are often written as the transpose of a column vector.
Vector operations
Vector operations let you combine, scale, and compare economic quantities represented as vectors. These are the building blocks for everything that follows.
Vector addition
Vector addition combines two vectors by adding their corresponding components. If $\mathbf{u} = (u_1, \ldots, u_n)$ and $\mathbf{v} = (v_1, \ldots, v_n)$, then:

$\mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2, \ldots, u_n + v_n)$
In economics, this is how you aggregate quantities. If two factories each produce a vector of outputs, adding those vectors gives total production.
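The aggregation idea can be sketched in NumPy (the factory output figures are made up):

```python
import numpy as np

# Output vectors (units of goods A, B, C) for two hypothetical factories.
factory_1 = np.array([100, 50, 25])
factory_2 = np.array([80, 70, 0])

# Component-wise addition gives total production of each good.
total = factory_1 + factory_2
print(total)  # [180 120  25]
```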
Scalar multiplication
Scalar multiplication scales a vector $\mathbf{v}$ by a real number $c$, changing its magnitude but not its direction (unless $c$ is negative, which reverses direction):

$c\mathbf{v} = (cv_1, cv_2, \ldots, cv_n)$
This models proportional changes. If a price vector is multiplied by 1.05, every price increases by 5%.
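A one-line sketch of the 5% example (the starting prices are made up):

```python
import numpy as np

# A made-up price vector; multiplying by the scalar 1.05 raises every price by 5%.
p = np.array([2.00, 3.00, 4.00])
p_new = 1.05 * p
print(p_new)  # [2.1  3.15 4.2 ]
```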
Dot product
The dot product (or inner product) multiplies two vectors component-wise and sums the results, producing a scalar:

$\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n$
This is one of the most useful operations in economics. Total revenue, for instance, is the dot product of a price vector and a quantity vector. If prices are $\mathbf{p} = (2, 3, 1)$ and quantities are $\mathbf{q} = (10, 5, 20)$, total revenue is $\mathbf{p} \cdot \mathbf{q} = 2 \cdot 10 + 3 \cdot 5 + 1 \cdot 20 = 55$.
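A revenue calculation sketched in NumPy, with made-up prices and quantities for three goods:

```python
import numpy as np

# Illustrative prices and quantities for three goods.
p = np.array([2.0, 3.0, 1.0])
q = np.array([10.0, 5.0, 20.0])

# Total revenue is the dot product: sum of price * quantity over all goods.
revenue = np.dot(p, q)   # equivalently p @ q
print(revenue)  # 55.0
```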
The dot product also measures how aligned two vectors are. A dot product of zero means the vectors are orthogonal (perpendicular), indicating no linear relationship.
Cross product
The cross product is defined only in 3D space and produces a vector perpendicular to both inputs:

$\mathbf{u} \times \mathbf{v} = (u_2 v_3 - u_3 v_2,\; u_3 v_1 - u_1 v_3,\; u_1 v_2 - u_2 v_1)$
Its magnitude equals the area of the parallelogram formed by the two vectors. The cross product is rare in economics since most economic applications involve spaces with more (or fewer) than three dimensions, but it appears occasionally in specialized models.
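A quick sketch with the standard unit vectors (chosen only because the result is easy to check by hand):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

# The cross product of the x- and y-axis unit vectors is the z-axis unit vector,
# perpendicular to both inputs.
w = np.cross(u, v)
print(w)  # [0. 0. 1.]

# Its magnitude equals the area of the parallelogram spanned by u and v (here 1).
print(np.linalg.norm(w))  # 1.0
```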
Vector spaces
A vector space is a collection of vectors that you can add together and scale by real numbers while staying within the collection. This structure gives economists a rigorous framework for analyzing systems of linear relationships.
Properties of vector spaces
A valid vector space must satisfy these axioms:
- Closure: adding two vectors in the space or multiplying a vector by a scalar always produces another vector in the space
- Associativity and commutativity of vector addition
- Distributivity of scalar multiplication over vector addition
- Existence of a zero vector (the additive identity)
- Existence of an additive inverse for every vector
These properties guarantee that algebraic manipulations behave predictably, which is essential when building economic models.

Subspaces
A subspace is a subset of a vector space that is itself a vector space. To qualify, it must contain the zero vector and be closed under addition and scalar multiplication.
In economics, subspaces and closely related sets represent constrained portions of a larger system. For example, the set of consumption bundles whose cost exactly equals a positive income is not quite a subspace (it excludes the zero vector), but it is a related geometric object: a hyperplane, i.e. a translated subspace, within the full commodity space.
Linear independence
A set of vectors is linearly independent if no vector in the set can be written as a linear combination of the others. Formally, vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k$ are linearly independent if the only solution to:

$c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k = \mathbf{0}$

is $c_1 = c_2 = \cdots = c_k = 0$.
This matters because linearly dependent vectors contain redundant information. In an economic model, if one variable is a perfect linear combination of others, it adds no new explanatory power. Identifying independence helps you find the minimal set of variables needed to describe a system.
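One practical way to test independence (a sketch, not the only method) is to stack the vectors as columns of a matrix and compare its rank to the number of vectors; the example vectors here are made up:

```python
import numpy as np

# Candidate vectors; v3 is deliberately a linear combination of v1 and v2.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2

# Rank equal to the number of columns means the set is linearly independent.
independent_set = np.column_stack([v1, v2])
dependent_set = np.column_stack([v1, v2, v3])

print(np.linalg.matrix_rank(independent_set))  # 2 — independent
print(np.linalg.matrix_rank(dependent_set))    # 2 < 3 — dependent
```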
Basis and dimension
These concepts tell you about the fundamental structure of a vector space: how many independent directions it has and how to represent any vector within it.
Basis vectors
A basis is a set of linearly independent vectors that spans the entire vector space. "Spans" means every vector in the space can be written as a linear combination of the basis vectors, and "linearly independent" means none of the basis vectors are redundant.
Any vector $\mathbf{v}$ in the space can be uniquely expressed as:

$\mathbf{v} = c_1 \mathbf{b}_1 + c_2 \mathbf{b}_2 + \cdots + c_n \mathbf{b}_n$

where $\mathbf{b}_1, \ldots, \mathbf{b}_n$ are the basis vectors and $c_1, \ldots, c_n$ are scalars (the coordinates).
Dimension of vector spaces
The dimension of a vector space is the number of vectors in any basis. This tells you the degrees of freedom in the system.
An economy with three independent goods has a 3-dimensional commodity space. A model with 50 independent variables operates in a 50-dimensional space. Dimension determines how complex the system is and how many equations you need to pin down a unique solution.
Change of basis
Sometimes it's useful to express the same vectors using a different set of basis vectors. This is done through a transformation matrix that converts coordinates from one basis to another.
Why bother? Different bases can reveal different structure in the data. In economics, switching bases might simplify a model, diagonalize a system, or align variables with economically meaningful directions (like principal components in data analysis).
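A change of basis can be sketched as solving a linear system: if the columns of $B$ are the new basis vectors, the new coordinates $c$ of a vector $v$ satisfy $Bc = v$. The basis and vector below are made up:

```python
import numpy as np

# Columns of B are the new basis vectors (a made-up basis of R^2).
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# Standard-basis coordinates of some vector v.
v = np.array([3.0, 1.0])

# Coordinates of v in the new basis solve B @ c = v.
c = np.linalg.solve(B, v)
print(c)  # [2. 1.]  because 2*(1,1) + 1*(1,-1) = (3,1)

# Converting back to standard coordinates is just multiplication by B.
print(B @ c)  # [3. 1.]
```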
Linear transformations
A linear transformation is a function between vector spaces that preserves addition and scalar multiplication. These transformations model how economic inputs map to outputs while maintaining proportional relationships.
Matrix representation
Every linear transformation can be represented as a matrix. For a transformation $T: \mathbb{R}^n \to \mathbb{R}^m$, there exists an $m \times n$ matrix $A$ such that:

$T(\mathbf{v}) = A\mathbf{v}$

for all vectors $\mathbf{v}$. This is powerful because it reduces the study of transformations to the study of matrices, which are straightforward to compute with. An input-output model of an economy, for instance, uses a matrix to describe how each sector's output depends on inputs from every other sector.
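A toy input-output (Leontief-style) sketch, with entirely made-up technical coefficients and demand figures:

```python
import numpy as np

# Toy two-sector input-output model with made-up technical coefficients.
# A[i, j] = units of sector i's output needed per unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])

# Final (outside) demand for each sector's output.
d = np.array([100.0, 50.0])

# Gross output x must cover intermediate use A @ x plus final demand:
# x = A @ x + d  =>  (I - A) @ x = d
x = np.linalg.solve(np.eye(2) - A, d)
print(x)

# Check: production net of intermediate use equals final demand.
print(x - A @ x)  # [100.  50.]
```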
Eigenvalues and eigenvectors
An eigenvector of a matrix $A$ is a nonzero vector $\mathbf{v}$ that, when transformed by $A$, only gets scaled (not rotated):

$A\mathbf{v} = \lambda \mathbf{v}$

The scalar $\lambda$ is the corresponding eigenvalue.
Eigenvectors identify the directions in which a linear transformation acts as pure scaling. In economics, eigenvalues appear in dynamic models: if you model an economy's evolution as a linear system, the eigenvalues tell you whether the system grows, shrinks, or stays stable over time. Eigenvalues greater than 1 in absolute value indicate explosive growth or decline; values less than 1 indicate convergence to equilibrium.
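The stability claim can be sketched with a made-up linear system $x_{t+1} = A x_t$:

```python
import numpy as np

# A made-up linear dynamic system x_{t+1} = A @ x_t.
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)

# Both eigenvalues (0.9 and 0.5) lie inside the unit circle, so the system
# converges: iterating the map shrinks any starting vector toward zero.
x = np.array([1.0, 1.0])
for _ in range(200):
    x = A @ x
print(np.linalg.norm(x))  # very close to 0
```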
Applications in economics
Vectors appear throughout economic modeling. Here are the most common applications you'll encounter.

Price vectors
A price vector $\mathbf{p} = (p_1, p_2, \ldots, p_n)$ lists the prices of $n$ goods. This compact representation is central to consumer theory. Budget constraints, for example, take the form $\mathbf{p} \cdot \mathbf{x} \leq m$, where $\mathbf{x}$ is a consumption bundle and $m$ is income.
Tracking price vectors over time also lets you study inflation across sectors simultaneously.
Quantity vectors
A quantity vector represents amounts of goods produced, consumed, or traded. In production theory, input and output quantity vectors describe what goes into and comes out of a production process.
Combined with price vectors through the dot product, quantity vectors yield total cost, total revenue, or total expenditure.
Production possibility frontiers
The production possibility frontier (PPF) describes the maximum combinations of goods an economy can produce given fixed resources. In a two-good economy, the PPF is a curve in 2D space. In a multi-good economy, it becomes a surface (or hypersurface) in higher-dimensional vector space.
Each point on the PPF is a production vector. Moving along the frontier illustrates opportunity costs: producing more of one good requires producing less of another.
Vector calculus
Vector calculus extends the tools of single-variable calculus to functions of multiple variables. This is essential in economics because most economic functions depend on several inputs simultaneously.
Gradient vectors
The gradient of a scalar-valued function $f(x_1, \ldots, x_n)$ is the vector of all its partial derivatives:

$\nabla f = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right)$

The gradient points in the direction of steepest increase of $f$. In optimization, setting $\nabla f = \mathbf{0}$ is the first-order condition for finding maxima or minima. For a utility function, the gradient at a point tells you which combination of small changes in consumption would increase utility the fastest.
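A sketch using a made-up Cobb-Douglas-style utility function, with the gradient computed from its analytic partial derivatives:

```python
import numpy as np

# A made-up utility function: u(x, y) = x^0.5 * y^0.5
def u(x, y):
    return x**0.5 * y**0.5

# Analytic gradient: (du/dx, du/dy)
def grad_u(x, y):
    return np.array([0.5 * x**-0.5 * y**0.5,
                     0.5 * x**0.5 * y**-0.5])

g = grad_u(4.0, 1.0)
print(g)  # [0.25 1.  ]
# At the bundle (4, 1) the second component is larger: a small increase in y
# raises utility faster than the same small increase in x.
```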
Directional derivatives
The directional derivative measures the rate of change of a function $f$ in a specific direction, given by a unit vector $\mathbf{u}$:

$D_{\mathbf{u}} f = \nabla f \cdot \mathbf{u}$
This is useful when you want to know how an economic outcome changes along a particular path, not just along coordinate axes. For example, if a firm changes two inputs simultaneously, the directional derivative tells you the resulting rate of change in output.
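A sketch with a made-up function: for $f(x, y) = x^2 + y^2$ the gradient at $(1, 2)$ is $(2, 4)$, and we evaluate the derivative in the direction that increases both variables equally:

```python
import numpy as np

# Gradient of f(x, y) = x^2 + y^2 at the point (1, 2).
grad_f = np.array([2.0, 4.0])

# Unit vector for the direction "increase x and y equally".
u = np.array([1.0, 1.0]) / np.sqrt(2)

# Directional derivative D_u f = grad(f) . u
D_u = np.dot(grad_f, u)
print(D_u)  # 6 / sqrt(2), about 4.243
```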
Optimization with vectors
Many core economic problems reduce to optimization: maximizing utility, minimizing cost, or allocating resources efficiently. Vector methods handle the multivariable nature of these problems.
Constrained optimization
Most economic optimization problems involve constraints (budgets, resource limits, capacity). The general form is:
- Maximize (or minimize) $f(\mathbf{x})$
- Subject to $g(\mathbf{x}) = c$ (or inequality constraints such as $g(\mathbf{x}) \leq c$)

where both $f$ and $g$ may be functions of vector-valued inputs.
Lagrange multipliers
The Lagrange multiplier method solves equality-constrained optimization problems. The key idea: at an optimum, the gradient of the objective function must be proportional to the gradient of the constraint.
- Set up the Lagrangian: $\mathcal{L}(\mathbf{x}, \lambda) = f(\mathbf{x}) - \lambda \left( g(\mathbf{x}) - c \right)$
- Take partial derivatives of $\mathcal{L}$ with respect to each variable $x_i$ and $\lambda$
- Set all partial derivatives equal to zero
- Solve the resulting system of equations for $\mathbf{x}$ and $\lambda$
The multiplier $\lambda$ has an economic interpretation: it represents the shadow price of the constraint, or how much the optimal value of $f$ would change if the constraint were relaxed by one unit. In utility maximization, $\lambda$ is the marginal utility of income.
For inequality constraints, the more general Karush-Kuhn-Tucker (KKT) conditions extend this approach.
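A worked sketch of the Lagrange method on a classic textbook problem: maximize a Cobb-Douglas utility $u(x, y) = x^a y^{1-a}$ subject to a budget $p_1 x + p_2 y = m$. The first-order conditions yield the closed-form solution $x^* = am/p_1$, $y^* = (1-a)m/p_2$; the parameter values below are made up, and the code verifies the conditions rather than deriving them:

```python
import numpy as np

# Made-up parameters: preference weight a, prices p1 and p2, income m.
a, p1, p2, m = 0.4, 2.0, 5.0, 100.0

# Closed-form Lagrange solution for Cobb-Douglas utility.
x_star = a * m / p1            # 20.0
y_star = (1 - a) * m / p2      # 12.0

# The budget constraint binds at the optimum.
print(p1 * x_star + p2 * y_star)  # 100.0

# Tangency condition from the Lagrange FOCs: MU_x / MU_y = p1 / p2,
# i.e. the gradient of utility is proportional to the gradient of the constraint.
mu_x = a * x_star**(a - 1) * y_star**(1 - a)
mu_y = (1 - a) * x_star**a * y_star**(-a)
print(np.isclose(mu_x / mu_y, p1 / p2))  # True
```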
Vector spaces in econometrics
Econometrics applies linear algebra to estimate economic relationships from data. Vector spaces provide the geometric intuition behind regression and estimation.
Regression analysis
In regression, you model a dependent variable $\mathbf{y}$ as a linear combination of predictor variables (columns of a matrix $X$) plus an error term. The regression coefficients form a vector $\boldsymbol{\beta}$.
Geometrically, regression projects the vector $\mathbf{y}$ onto the column space of $X$. The predicted values $\hat{\mathbf{y}}$ are the closest point in that subspace to the actual $\mathbf{y}$, and the residuals are the perpendicular distance between them.
Least squares estimation
Ordinary Least Squares (OLS) finds the coefficient vector $\hat{\boldsymbol{\beta}}$ that minimizes the sum of squared residuals. The solution is:

$\hat{\boldsymbol{\beta}} = (X^\top X)^{-1} X^\top \mathbf{y}$

where $X^\top$ is the transpose of $X$. This formula requires that $X^\top X$ is invertible, which happens when the columns of $X$ are linearly independent (no perfect multicollinearity among predictors).
The projection interpretation explains why: OLS finds the linear combination of predictors that comes closest to $\mathbf{y}$ in a least-squares sense, which is exactly the orthogonal projection of $\mathbf{y}$ onto the column space of $X$.
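The OLS formula and its projection interpretation can be sketched on a tiny made-up dataset (noiseless, so the fit is exact):

```python
import numpy as np

# Tiny made-up dataset: y depends exactly on one predictor plus an intercept.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])            # first column of ones = intercept
y = np.array([3.0, 5.0, 7.0, 9.0])    # y = 1 + 2*x, no noise

# OLS via the normal equations: solve (X'X) beta = X'y
# (numerically preferable to forming the inverse explicitly).
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # [1. 2.]

# Projection interpretation: residuals are orthogonal to the column space of X.
residuals = y - X @ beta_hat
print(X.T @ residuals)  # approximately [0. 0.]
```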