State feedback control overview
State feedback control uses knowledge of a system's internal state variables to generate control inputs that drive the system toward a desired behavior. Unlike classical output feedback (where you only use the measured output), state feedback gives you access to the full internal picture, which means you can shape the system's dynamics much more precisely.
This technique is especially valuable for MIMO (multiple-input, multiple-output) systems, where classical transfer function methods become unwieldy. The core idea: measure (or estimate) all the state variables, multiply them by a gain matrix, and feed the result back as your control input.
State space representation
A state-space model captures a system's dynamics using four matrices. The standard continuous-time linear model looks like this:

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

where x is the state vector, u is the input vector, and y is the output vector.
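For concreteness, here is a minimal NumPy sketch of the four matrices for a mass-spring-damper system (the parameter values m, c, k are illustrative assumptions, not from the text):

```python
import numpy as np

# Mass-spring-damper: m*x'' + c*x' + k*x = u, with states
# x1 = position, x2 = velocity. Parameter values are assumed for illustration.
m, c, k = 1.0, 0.5, 2.0

A = np.array([[0.0, 1.0],
              [-k/m, -c/m]])   # system matrix
B = np.array([[0.0],
              [1.0/m]])        # input matrix
C = np.array([[1.0, 0.0]])     # output matrix: we measure position only
D = np.array([[0.0]])          # no direct feedthrough

# Evaluate the model at one state/input: x_dot = A x + B u, y = C x + D u
x = np.array([[0.1], [0.0]])
u = np.array([[1.0]])
x_dot = A @ x + B @ u
y = C @ x + D @ u
```

The same (A, B, C) triple is a convenient running example for the design techniques that follow.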
State variables
State variables are the minimum set of variables that fully describe a system's condition at any point in time. If you know the current state and all future inputs, you can predict the system's entire future behavior.
- Common physical examples: position, velocity, current, temperature, pressure
- The number of state variables equals the system order and defines the dimension of the state space
- The choice of state variables isn't unique. Different valid choices lead to different (but equivalent) state-space representations related by a similarity transformation
State transition matrix
The matrix A (sometimes called the system matrix or state matrix) governs how the states evolve on their own, without any external input. Its entries encode the linear relationships between state variables and their derivatives.
- A is a square n × n matrix, where n is the number of state variables
- The eigenvalues of A are the open-loop poles of the system, which determine natural stability and dynamic behavior
- Note: the term "state transition matrix" more precisely refers to Φ(t) = e^{At} (the matrix exponential of At), which maps the state from one time to another. The matrix A itself is the system matrix.
Input matrix
The matrix B maps control inputs to changes in the state variables. It tells you which states are directly influenced by each input and how strongly.
- Dimensions: n × m, where n is the number of states and m is the number of inputs
- If a column of B is all zeros, that input has no direct effect on any state variable
Output matrix
The matrix C relates the internal states to the measurable outputs. It defines which linear combinations of state variables you can actually observe through your sensors.
- Dimensions: p × n, where p is the number of outputs and n is the number of states
Feedthrough matrix
The matrix D captures any direct path from the input to the output that bypasses the state dynamics entirely.
- Dimensions: p × m
- In most physical systems, D = 0 because inputs must propagate through the system dynamics before affecting outputs. A nonzero D appears in systems where the input instantaneously influences the output.
Pole placement
Desired pole locations
Pole placement is the technique of choosing a feedback gain so that the closed-loop system has poles exactly where you want them in the complex plane. Since pole locations dictate transient behavior, this gives you direct control over how the system responds.
- Poles further left in the complex plane produce faster responses
- Complex conjugate poles with larger real-to-imaginary ratios yield higher damping (less oscillation)
- You pick pole locations based on specs like settling time, overshoot, and damping ratio
Characteristic equation
The characteristic equation of the open-loop system is:

det(sI − A) = 0

The roots of this polynomial are the open-loop poles. With state feedback u = −Kx, the closed-loop system matrix becomes A − BK, and the new characteristic equation is:

det(sI − A + BK) = 0

By choosing K, you manipulate the coefficients of this polynomial to place the roots at your desired locations.
Controllability
Before attempting pole placement, you must verify that the system is controllable. Controllability means you can steer the state from any initial condition to any final condition in finite time.
The controllability matrix is:

𝒞 = [B  AB  A²B  …  Aⁿ⁻¹B]

The system is controllable if and only if 𝒞 has full row rank (rank n). If the system is not controllable, you cannot place all poles arbitrarily.
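The rank test is a one-liner in NumPy. A minimal sketch, using assumed illustrative matrices:

```python
import numpy as np

# Controllability check for an assumed illustrative system.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# Build the controllability matrix [B, AB, ..., A^(n-1) B] block by block.
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

# Full row rank (rank n) <=> controllable
controllable = np.linalg.matrix_rank(ctrb) == n
```

For poorly scaled systems the rank computation can be numerically delicate; in practice a singular-value check with an explicit tolerance is more reliable than an exact rank test.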
State feedback gain matrix

Feedback gain calculations
The state feedback control law is:

u = −Kx

where K is the m × n gain matrix. The goal is to find K such that the eigenvalues of A − BK match your desired pole locations.
For a single-input system, the procedure is:
1. Write out the desired characteristic polynomial from your chosen pole locations: (s − p₁)(s − p₂)⋯(s − pₙ) = 0
2. Expand the closed-loop characteristic polynomial det(sI − A + BK) in terms of the unknown gains k₁, …, kₙ
3. Match coefficients between the two polynomials
4. Solve the resulting system of equations for the entries of K
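The steps above can be sketched numerically. The system below is an assumption chosen so that A is in controllable canonical form, which makes the coefficient matching explicit:

```python
import numpy as np

# Coefficient matching for a 2-state, single-input example (values assumed).
# A, B are in controllable canonical form: char poly of A is s^2 + 0.5 s + 2.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

# Step 1: desired poles at -2 +/- 2j  ->  s^2 + 4 s + 8
desired = np.array([-2 + 2j, -2 - 2j])
a_des = np.real(np.poly(desired))     # coefficients [1, 4, 8]

# Steps 2-4: with K = [k1, k2], the closed-loop polynomial of A - B K is
#   s^2 + (0.5 + k2) s + (2 + k1),
# so matching coefficients gives k1 = 8 - 2 and k2 = 4 - 0.5.
K = np.array([[a_des[2] - 2.0, a_des[1] - 0.5]])

# Verify: eigenvalues of A - B K should equal the desired poles.
cl_poles = np.linalg.eigvals(A - B @ K)
```

In canonical form the matching is trivial because each gain entry shifts exactly one polynomial coefficient; for a general A you would first transform to this form or use a numerical routine such as scipy.signal.place_poles.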
Ackermann's formula
For single-input systems, Ackermann's formula provides a direct, closed-form solution:

K = [0 0 … 0 1] 𝒞⁻¹ α_c(A)

where 𝒞 is the controllability matrix and α_c(A) is the desired characteristic polynomial evaluated at the matrix A. This avoids coefficient matching entirely, but it requires 𝒞 to be invertible (i.e., the system must be controllable). For multi-input systems or high-order systems, numerical methods are generally preferred.
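Ackermann's formula translates directly into NumPy. A sketch with assumed illustrative matrices:

```python
import numpy as np

# Ackermann's formula for a single-input system (values assumed).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

desired = np.array([-3.0, -4.0])
alpha = np.poly(desired)              # desired char. polynomial coefficients

# alpha_c(A): evaluate the desired polynomial at the matrix A
# (highest power first, ending with the constant term times the identity).
alpha_A = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(alpha))

# Controllability matrix and the last unit row vector [0 ... 0 1]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
e_last = np.zeros((1, n))
e_last[0, -1] = 1.0

# K = [0 ... 0 1] * inv(ctrb) * alpha_c(A)
K = e_last @ np.linalg.inv(ctrb) @ alpha_A
```

Explicitly inverting 𝒞 is exactly why the formula is ill-conditioned for high-order systems, which motivates the numerical methods mentioned above.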
Linear quadratic regulator (LQR)
Pole placement lets you put poles wherever you want, but it doesn't tell you where to put them. LQR solves this by framing the problem as an optimization: find the gain that minimizes a cost function balancing state regulation against control effort.
Cost function
The standard LQR cost function is:

J = ∫₀^∞ (xᵀQx + uᵀRu) dt

- Q is a positive semi-definite matrix that penalizes deviations in the state variables
- R is a positive definite matrix that penalizes control effort
- Increasing the weight on Q relative to R produces a more aggressive controller (faster response, larger control signals). Increasing R relative to Q produces a more conservative controller.
Riccati equation
The optimal gain requires solving the continuous-time algebraic Riccati equation (CARE):

AᵀP + PA − PBR⁻¹BᵀP + Q = 0

where P is a positive definite symmetric matrix. This is a nonlinear matrix equation, but standard software (MATLAB's lqr, Python's scipy.linalg.solve_continuous_are) handles it reliably.
LQR gain matrix
Once you have P, the optimal gain is:

K = R⁻¹BᵀP

The resulting closed-loop system is guaranteed to be stable (assuming controllability) and has good robustness properties, including at least 60° of phase margin and infinite gain margin at each input channel for single-input systems.
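The full LQR computation is two lines once the CARE is solved. A sketch with assumed system matrices and weights:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR for an assumed illustrative system.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalize position error more than velocity
R = np.array([[1.0]])      # penalty on control effort

# Solve the CARE: A'P + PA - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal gain K = R^-1 B' P
K = np.linalg.inv(R) @ B.T @ P

# The closed loop A - B K is guaranteed stable.
cl_poles = np.linalg.eigvals(A - B @ K)
```

Retuning then amounts to adjusting the Q/R ratio and re-solving, rather than hand-picking pole locations.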
Observer design
In practice, you rarely have direct access to all state variables. Observers (state estimators) reconstruct the full state vector from the available outputs.
Observability
Observability determines whether you can infer the full internal state from the output measurements. The observability matrix is:

𝒪 = [C; CA; CA²; …; CAⁿ⁻¹]

The system is observable if and only if 𝒪 has full column rank (rank n). Without observability, no observer can reconstruct the complete state.
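The rank test mirrors the controllability check, stacking row blocks instead of columns. A sketch with assumed matrices (measuring position only):

```python
import numpy as np

# Observability check for an assumed illustrative system.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])   # we measure position only
n = A.shape[0]

# Build the observability matrix [C; CA; ...; C A^(n-1)] row block by row block.
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Full column rank (rank n) <=> observable
observable = np.linalg.matrix_rank(obsv) == n
```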
Luenberger observer
A Luenberger observer is a model-based estimator with the following structure:

dx̂/dt = A x̂ + B u + L(y − C x̂)

where x̂ is the estimated state and L is the observer gain matrix. The term L(y − C x̂) corrects the estimate based on the difference between the actual output y and the predicted output ŷ = C x̂.

The estimation error e = x − x̂ evolves as:

de/dt = (A − LC) e

By choosing L so that A − LC has eigenvalues with negative real parts (placed further left than the controller poles, typically 2-5 times faster), the estimate converges to the true state.
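Observer gain design is the dual of controller pole placement: the eigenvalues of A − LC equal those of Aᵀ − CᵀLᵀ, so you can place poles for the pair (Aᵀ, Cᵀ) and transpose. A sketch with assumed matrices:

```python
import numpy as np
from scipy.signal import place_poles

# Observer gain by duality, for an assumed illustrative system.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])

# Observer poles chosen roughly 4x faster than controller poles at -2 +/- 2j.
observer_poles = np.array([-8.0, -9.0])

# place_poles solves the dual problem; transpose the result to get L.
L = place_poles(A.T, C.T, observer_poles).gain_matrix.T

# Error dynamics poles: eigenvalues of A - L C
err_poles = np.linalg.eigvals(A - L @ C)
```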
Kalman filter
The Kalman filter is the optimal observer when process noise and measurement noise are present. It minimizes the mean square estimation error under the assumption that:
- Process noise and measurement noise are zero-mean, white, and Gaussian
- Their covariance matrices (the process noise covariance W and measurement noise covariance V) are known
The Kalman gain is computed by solving a Riccati equation analogous to the LQR problem, but for the estimation side. The Kalman filter automatically balances trust in the model versus trust in the measurements based on the noise covariances.
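The steady-state (Kalman-Bucy) gain can be computed with the same CARE solver used for LQR, applied to the dual pair (Aᵀ, Cᵀ). A sketch with assumed system and noise covariances:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Steady-state Kalman-Bucy gain via the dual (estimation-side) Riccati equation.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
W = np.diag([0.1, 0.1])    # process noise covariance (assumed)
V = np.array([[0.01]])     # measurement noise covariance (assumed)

# Solving the CARE for (A', C', W, V) yields the steady-state error
# covariance P; the Kalman gain is then L = P C' V^-1.
P = solve_continuous_are(A.T, C.T, W, V)
L = P @ C.T @ np.linalg.inv(V)

# A - L C is stable: the estimation error decays in expectation.
est_poles = np.linalg.eigvals(A - L @ C)
```

Shrinking V relative to W yields a larger gain (trust the measurements more); growing V does the opposite.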
Separation principle

Controller and observer design
The separation principle is what makes the observer-based controller design tractable. It states that for linear systems, you can:
- Design the state feedback gain K as if you had perfect state measurements
- Design the observer gain L independently to estimate the states
- Combine them: use u = −K x̂, where x̂ comes from the observer
The two designs do not interfere with each other.
Closed-loop stability
The combined controller-observer system has a characteristic polynomial that factors into two independent parts:

det(sI − A + BK) · det(sI − A + LC) = 0

This means the closed-loop poles are simply the union of the controller poles and the observer poles. If both sets of poles are in the left half-plane, the overall system is stable. This factoring property is specific to linear systems and does not generally hold for nonlinear systems.
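The factoring can be checked numerically: in (state, estimation-error) coordinates the combined closed-loop matrix is block triangular, so its eigenvalues are exactly the controller poles plus the observer poles. The gains below are assumed illustrative values for this A, B, C:

```python
import numpy as np

# Separation principle check for an assumed illustrative system.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[6.0, 3.5]])     # places controller poles at -2 +/- 2j
L = np.array([[16.5],
              [61.75]])        # places observer poles at -8 and -9

# Combined dynamics of [x; e], with e = x - x_hat:
#   x_dot = (A - B K) x + B K e
#   e_dot = (A - L C) e        (block triangular, K and L decoupled)
top = np.hstack([A - B @ K, B @ K])
bottom = np.hstack([np.zeros((2, 2)), A - L @ C])
combined = np.vstack([top, bottom])

poles = np.linalg.eigvals(combined)
```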
Integral control
Steady-state error
A standard state feedback controller can regulate the state to zero, but it won't necessarily track a nonzero reference or reject constant disturbances with zero steady-state error. The problem is that there's nothing in the control law that "remembers" accumulated error.
Augmented state space model
To fix this, you augment the state vector with an integral state:

dx_I/dt = r − y

where r is the reference and y is the output. The augmented state vector becomes:

x_a = [x; x_I]

The augmented system matrices are constructed to include the integrator dynamics, and you design a gain K_a = [K  K_I] for the augmented system. The K_I portion provides the integral action that drives steady-state error to zero for constant references and disturbances.
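Constructing the augmented matrices and placing poles for the augmented system can be sketched as follows (system matrices and pole choices are assumptions for illustration):

```python
import numpy as np
from scipy.signal import place_poles

# Integral-action design for an assumed illustrative system.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Augmented states [x; x_I], where x_I_dot = r - C x.
# (The reference r enters separately and does not affect the gain design.)
A_aug = np.block([[A, np.zeros((2, 1))],
                  [-C, np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])

# One gain for the augmented system: K_aug = [K, K_I]
poles = np.array([-2.0, -3.0, -4.0])
K_aug = place_poles(A_aug, B_aug, poles).gain_matrix
K, K_I = K_aug[:, :2], K_aug[:, 2:]
```

The control law is then u = −K x − K_I x_I, with x_I integrated online from the tracking error.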
Robust control
Parameter uncertainties
Real systems never match their models perfectly. Parameter uncertainties come from modeling approximations, manufacturing tolerances, component aging, and varying operating conditions. A controller that works perfectly for the nominal model but fails under small perturbations is not useful.
Sensitivity analysis
Sensitivity functions quantify how much the closed-loop behavior changes when parameters vary. The sensitivity transfer function S(s) and complementary sensitivity function T(s) satisfy:

S(s) + T(s) = I

Low sensitivity (small |S(jω)|) at a given frequency means good disturbance rejection there, but the constraint S + T = I means you can't have low sensitivity everywhere. Sensitivity analysis identifies which parameters matter most and guides robust design.
H-infinity control
H∞ control minimizes the worst-case gain (the H∞ norm) of a specified closed-loop transfer function. The H∞ norm is the peak value of the maximum singular value across all frequencies:

‖G‖∞ = sup_ω σ̄(G(jω))
This approach provides hard guarantees on performance and stability under the modeled uncertainty. The design involves solving two Riccati equations (or an equivalent LMI problem) and produces a controller that is robust by construction, though often more conservative than LQR.
Digital implementation
Discretization methods
To implement state feedback on a digital computer, you need discrete-time equivalents of the continuous-time system and controller.
Common discretization methods:
- Zero-order hold (ZOH): Assumes the control input is held constant between samples. Produces exact discrete-time matrices A_d = e^{AT} and B_d = (∫₀^T e^{Aτ} dτ) B, where T is the sampling period. This is the most common choice.
- Tustin (bilinear) approximation: Maps the s-plane to the z-plane using s = (2/T)·(z − 1)/(z + 1). Preserves frequency-domain properties better than ZOH for some applications.
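Both methods are available through scipy.signal.cont2discrete. A ZOH sketch with an assumed continuous-time model and sampling period:

```python
import numpy as np
from scipy.signal import cont2discrete

# ZOH discretization of an assumed continuous-time model.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 0.01   # sampling period (assumed)

# Exact discrete-time matrices under the ZOH assumption
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method='zoh')

# Sanity check: ZOH maps each continuous pole s exactly to z = e^{sT},
# so stable continuous poles land inside the unit circle.
zd = np.linalg.eigvals(Ad)
```

Passing method='bilinear' to the same call gives the Tustin approximation instead.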
Sampling and reconstruction
- The sampling rate must satisfy the Nyquist-Shannon theorem: sample at least twice the highest frequency of interest to avoid aliasing. In practice, sampling 10-20 times the closed-loop bandwidth is a common rule of thumb.
- A zero-order hold on the output side reconstructs the continuous-time control signal by holding each computed value constant until the next sample.
Digital controller design
Once you have the discrete-time model (A_d, B_d, C_d, D_d), you can apply the same design techniques (pole placement, LQR) using the discrete-time matrices. MATLAB's dlqr and place functions handle this directly. The key consideration is that desired pole locations must be mapped from the s-plane to the z-plane using z = e^{sT}, so stable poles must lie inside the unit circle rather than in the left half-plane.
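In Python, the dlqr computation is the discrete algebraic Riccati equation followed by K = (R + B_dᵀP B_d)⁻¹ B_dᵀP A_d. A sketch with an assumed model and weights:

```python
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.signal import cont2discrete

# Discrete-time LQR on a ZOH-discretized model (system values assumed).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
T = 0.01
Ad, Bd, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), T, method='zoh')

Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

# Discrete Riccati equation, then the discrete LQR gain.
P = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.inv(R + Bd.T @ P @ Bd) @ (Bd.T @ P @ Ad)

# Closed-loop poles must lie inside the unit circle.
z = np.linalg.eigvals(Ad - Bd @ K)
```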