Inertial navigation uses accelerometers and gyroscopes to track a vehicle's position and orientation over time. It's a core technique in robotics localization because it works without any external signals, but errors build up quickly, so it almost always needs to be paired with other sensors.
An Inertial Measurement Unit (IMU) packages accelerometers and gyroscopes together to measure specific force and angular velocity. These raw measurements get integrated mathematically to produce position, velocity, and orientation estimates. The catch: every small sensor error compounds with each integration step, which is why sensor fusion (combining IMU data with GPS, cameras, or lidar) is so important.
Principles of inertial navigation
Dead reckoning vs inertial navigation
These two approaches are related but distinct. Dead reckoning estimates position from a previously known position plus velocity and elapsed time. Think of a wheeled robot counting wheel rotations to figure out how far it's traveled. It's simple and cheap, but errors pile up fast because there's no way to self-correct.
Inertial navigation measures acceleration and angular velocity directly using onboard sensors, then integrates those measurements to get velocity, position, and orientation. It's more accurate in the short term because it captures actual forces acting on the robot, but it still drifts over time due to sensor imperfections.
- Dead reckoning: simpler, cheaper, larger drift
- Inertial navigation: more complex, more accurate short-term, still drifts without external corrections
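As a concrete illustration, here's a minimal wheel-odometry dead-reckoning update for a planar robot. The pose layout, speeds, and step size are illustrative assumptions, not from any particular platform:

```python
import math

def dead_reckon(pose, v, omega, dt):
    """Advance a 2D pose (x, y, heading) by one time step.

    v: forward speed from wheel encoders [m/s]
    omega: turn rate [rad/s]
    dt: elapsed time [s]
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Drive straight ahead at 1 m/s for 10 steps of 0.1 s.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = dead_reckon(pose, v=1.0, omega=0.0, dt=0.1)
# pose is now roughly (1.0, 0.0, 0.0)
```

Notice there's nothing here that can detect wheel slip or an encoder miscount: any error in `v` or `omega` goes straight into the pose and stays there, which is exactly the self-correction problem described above.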
Frames of reference
Inertial navigation requires you to think carefully about which coordinate system you're working in. Three frames come up constantly:
- Inertial frame: A non-accelerating, non-rotating reference frame. The Earth-Centered Inertial (ECI) frame is a common example.
- Body frame: Fixed to the robot itself and moves with it. Sensor measurements are naturally in this frame.
- Navigation frame: A local-level frame used to express position and velocity in a useful way. The North-East-Down (NED) frame is standard.
Coordinate transformations between these frames are a big part of the math in inertial navigation. You measure accelerations in the body frame, but you need position updates in the navigation frame, so you're constantly converting between them.
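A minimal sketch of that conversion, assuming a flat-ground robot where only the yaw angle matters (a full 3D system would use a rotation matrix or quaternion instead):

```python
import math

def body_to_nav(accel_body, yaw):
    """Rotate a body-frame (x=forward, y=right) acceleration into the
    navigation frame, using only yaw (flat-ground assumption)."""
    ax, ay = accel_body
    c, s = math.cos(yaw), math.sin(yaw)
    # Planar rotation: navigation x = north, y = east (NED convention)
    a_north = c * ax - s * ay
    a_east = s * ax + c * ay
    return (a_north, a_east)

# A robot facing east (yaw = 90 deg) accelerating forward at 1 m/s^2:
a_nav = body_to_nav((1.0, 0.0), math.radians(90))
# The forward acceleration appears entirely along the east axis.
```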
Equations of motion
The foundation is Newton's second law, F = ma: the net force on the robot determines its acceleration, so measuring acceleration tells you how its velocity, and therefore its position, is changing.

Accelerometers measure specific force, which is the combination of motion-induced acceleration and gravitational acceleration. To get the robot's actual acceleration, you need to subtract gravity from the accelerometer reading.
Gyroscopes measure angular velocity, which describes how fast the robot is rotating. The integration chain works like this:
- Integrate angular velocity to get orientation changes
- Use orientation to transform measured accelerations from the body frame to the navigation frame
- Subtract gravity from the transformed acceleration
- Integrate acceleration to get velocity
- Integrate velocity to get position
Each integration step amplifies any sensor errors, which is why drift is such a fundamental problem.
Inertial measurement units (IMUs)
Accelerometers
Accelerometers measure specific force along one or more axes. A 3-axis accelerometer covers all three spatial directions.
Common types include:
- Mechanical: Traditional spring-mass systems
- Piezoelectric: Generate voltage proportional to applied force
- MEMS: Tiny silicon structures on a chip, by far the most common in robotics due to low cost and small size
The main error sources are bias (a constant offset in the output even when nothing is happening), scale factor error (the sensor's sensitivity isn't perfectly calibrated), and noise (random fluctuations in the signal).
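These three error sources can be captured in a one-line sensor model. The parameter values below are illustrative, not taken from any real datasheet:

```python
import random

def simulate_accel(true_accel, bias=0.02, scale_err=0.01, noise_std=0.05):
    """One-axis accelerometer model:
    output = (1 + scale error) * true acceleration + bias + noise.
    All parameter values are illustrative."""
    noise = random.gauss(0.0, noise_std)
    return (1.0 + scale_err) * true_accel + bias + noise

# A stationary sensor "measuring" gravity reads slightly high on average:
reading = simulate_accel(9.81)
```

Averaging many readings reveals the systematic part (bias plus scale factor error) while the noise averages out, which is why calibration routines typically collect data with the sensor held still.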
Gyroscopes
Gyroscopes measure angular velocity about one or more axes. Their output gets integrated to track how the robot's orientation changes over time.
Common types include:
- Mechanical: Spinning mass gyroscopes (older technology)
- Optical: Ring laser gyroscopes and fiber-optic gyroscopes (high accuracy, high cost)
- MEMS: Vibrating structure gyroscopes (low cost, lower accuracy)
Gyroscopes suffer from the same error categories as accelerometers: bias, scale factor errors, and noise. Gyroscope bias is particularly problematic because it causes orientation errors that grow linearly with time, and those orientation errors then corrupt every subsequent acceleration measurement.
Magnetometers
Magnetometers measure the Earth's magnetic field to determine heading (the direction the robot is facing in the horizontal plane). They provide an absolute orientation reference, unlike gyroscopes which only measure changes in orientation.
- Complement gyroscope data by preventing heading drift over time
- Vulnerable to magnetic disturbances from nearby electronics, metal structures, or motors
- Hard iron effects are constant magnetic offsets (like a magnet bolted to the robot); soft iron effects are distortions caused by nearby ferromagnetic materials that warp the field
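A typical calibration undoes both effects in one step: subtract the hard iron offset, then multiply by a matrix that reverses the soft iron distortion. This sketch assumes a 2D reading and that the calibration parameters have already been estimated elsewhere:

```python
import math

def correct_mag(raw, hard_iron, soft_iron_inv):
    """Apply hard/soft iron correction to a 2D magnetometer reading.
    hard_iron: constant offset vector; soft_iron_inv: 2x2 matrix that
    undoes the field distortion. Both come from a calibration routine."""
    x = raw[0] - hard_iron[0]
    y = raw[1] - hard_iron[1]
    cx = soft_iron_inv[0][0] * x + soft_iron_inv[0][1] * y
    cy = soft_iron_inv[1][0] * x + soft_iron_inv[1][1] * y
    return (cx, cy)

# Identity soft-iron matrix: only the hard-iron offset is removed.
m = correct_mag((30.0, 12.0), hard_iron=(10.0, 2.0),
                soft_iron_inv=[[1.0, 0.0], [0.0, 1.0]])
heading = math.atan2(m[1], m[0])  # heading angle from the corrected field
```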
Error sources in inertial navigation
Understanding error sources matters because they determine how fast your navigation solution degrades.
Bias errors
A bias is a constant offset in the sensor output. Even when the sensor should read zero, it reads some small nonzero value. When you integrate a biased accelerometer, position error grows in proportion to t² (where t is time). That means a tiny bias becomes a large position error surprisingly fast. Estimating and compensating for bias is one of the most important tasks in inertial navigation.
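You can see the quadratic growth numerically by double-integrating a constant bias (the bias value here is illustrative):

```python
def position_drift(bias, duration, dt=0.01):
    """Double-integrate a constant accelerometer bias to show
    quadratic position error growth."""
    v = p = 0.0
    t = 0.0
    while t < duration:
        v += bias * dt   # first integration: bias -> velocity error
        p += v * dt      # second integration: velocity -> position error
        t += dt
    return p

# A tiny 0.01 m/s^2 bias after only 60 seconds:
drift = position_drift(0.01, 60.0)
# close to 0.5 * 0.01 * 60**2 = 18 m of position error
```

Doubling the duration quadruples the error, which is why even short GPS outages matter.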

Scale factor errors
Scale factor error means the sensor's output isn't perfectly proportional to the true input. If the real acceleration is 2 m/s² but the sensor reports 2.01 m/s², that's a scale factor error. These errors are proportional to the magnitude of what's being measured, so they're worst during high-dynamic maneuvers. Calibration helps minimize them.
Misalignment errors
Misalignment means the sensor axes aren't perfectly aligned with the body frame axes. If your x-axis accelerometer is tilted slightly toward the y-axis, some y-axis acceleration leaks into the x-axis reading. This cross-coupling corrupts your navigation solution. Careful mounting and calibration reduce misalignment errors.
Random noise
High-frequency fluctuations in sensor output cause short-term navigation errors. Noise characteristics, often described by power spectral density or Allan variance, are important when designing filters to process IMU data. You can't eliminate noise, but you can model it and account for it in your estimation algorithms.
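As a sketch of how Allan variance is computed, here is the simple non-overlapping form applied to synthetic white noise (real analyses usually use the overlapping estimator and sweep many cluster sizes):

```python
import math
import random

def allan_variance(samples, m):
    """Non-overlapping Allan variance for cluster size m
    (cluster time tau = m * sample period)."""
    # Average the signal over consecutive clusters of m samples.
    n = len(samples) // m
    means = [sum(samples[i*m:(i+1)*m]) / m for i in range(n)]
    # Half the mean squared difference between adjacent cluster means.
    diffs = [(means[i+1] - means[i]) ** 2 for i in range(n - 1)]
    return 0.5 * sum(diffs) / len(diffs)

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(100_000)]
# For pure white noise the Allan deviation falls as 1/sqrt(tau):
adev_small = math.sqrt(allan_variance(white, 10))
adev_large = math.sqrt(allan_variance(white, 1000))
```

On a real gyroscope the curve stops falling and turns back up at long cluster times, and the location of that minimum is what characterizes the sensor's bias instability.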
Inertial navigation algorithms
Strapdown integration
Modern IMUs are "strapdown," meaning the sensors are rigidly attached to the robot (as opposed to older gimbaled systems where sensors sat on a stabilized platform). Strapdown integration works like this:
- Read accelerometer and gyroscope measurements in the body frame
- Update the orientation estimate using gyroscope data
- Transform acceleration from the body frame to the navigation frame using the current orientation
- Subtract gravity from the navigation-frame acceleration
- Integrate to update velocity, then integrate again to update position
Because all measurements start in the body frame, coordinate transformations are required at every step. Errors from sensors and from numerical integration accumulate over time.
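The steps above can be sketched for the planar case, where only yaw matters and gravity drops out of the horizontal channels (a full 3D implementation would carry a quaternion and subtract the gravity vector explicitly):

```python
import math

def strapdown_step(state, accel_body, gyro_z, dt):
    """One planar strapdown update. state = (x, y, vx, vy, yaw).
    accel_body is (forward, right) specific force; gravity is omitted
    because motion is confined to the horizontal plane."""
    x, y, vx, vy, yaw = state
    # 1) attitude update from the gyroscope
    yaw += gyro_z * dt
    # 2) rotate body-frame acceleration into the navigation frame
    c, s = math.cos(yaw), math.sin(yaw)
    ax = c * accel_body[0] - s * accel_body[1]
    ay = s * accel_body[0] + c * accel_body[1]
    # 3) integrate acceleration -> velocity -> position
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    return (x, y, vx, vy, yaw)

# Constant forward acceleration, no rotation, 100 steps of 0.01 s:
state = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    state = strapdown_step(state, (1.0, 0.0), 0.0, 0.01)
```

Note that a yaw error in step 1 rotates the acceleration the wrong way in step 2, and that mistake is then integrated twice in step 3. This chain is the mechanism behind the rapid drift described throughout this section.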
Attitude representation
Orientation (also called attitude) can be represented in several ways, each with tradeoffs:
- Euler angles (roll, pitch, yaw): Intuitive and easy to visualize, but suffer from gimbal lock, a singularity where you lose a degree of freedom at certain orientations (specifically when pitch reaches ±90°).
- Rotation matrices: 3×3 matrices that avoid gimbal lock, but they're computationally heavier (9 parameters for 3 degrees of freedom) and require re-orthogonalization to stay valid.
- Quaternions: 4-parameter representation that's compact, computationally efficient, and avoids gimbal lock. This is the most common choice in practice for robotics and aerospace.
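A minimal quaternion attitude update from gyroscope data looks like this. It uses simple Euler integration with renormalization; production code often uses higher-order integration:

```python
import math

def quat_integrate(q, omega, dt):
    """Integrate body angular rate omega = (wx, wy, wz) [rad/s] into a
    unit quaternion q = (w, x, y, z) over one time step, then renormalize."""
    w, x, y, z = q
    wx, wy, wz = omega
    # Quaternion kinematics: q_dot = 0.5 * q * (0, omega)
    dw = 0.5 * (-x * wx - y * wy - z * wz)
    dx = 0.5 * ( w * wx + y * wz - z * wy)
    dy = 0.5 * ( w * wy - x * wz + z * wx)
    dz = 0.5 * ( w * wz + x * wy - y * wx)
    w, x, y, z = w + dw * dt, x + dx * dt, y + dy * dt, z + dz * dt
    # Renormalize so the quaternion stays a valid rotation.
    norm = math.sqrt(w*w + x*x + y*y + z*z)
    return (w / norm, x / norm, y / norm, z / norm)

# Spin about the z-axis at 90 deg/s for one second in small steps:
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(1000):
    q = quat_integrate(q, (0.0, 0.0, math.radians(90)), 0.001)
# q ends up close to a 90-degree z rotation: (cos 45, 0, 0, sin 45)
```

The renormalization step is the quaternion analogue of re-orthogonalizing a rotation matrix, but it's a single square root instead of a matrix operation, which is part of why quaternions are the common choice.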
Position and velocity updates
Once you have the orientation estimate, the update process follows a clear sequence:
- Transform body-frame acceleration into the navigation frame using the current attitude estimate
- Subtract gravitational acceleration to isolate the robot's true acceleration
- Integrate acceleration over the time step to update velocity
- Integrate velocity over the time step to update position
For high-accuracy applications, you also need to account for Earth's rotation and Coriolis effects (apparent forces due to operating in a rotating reference frame). For most ground robots, these effects are small enough to ignore, but they matter for aerospace and long-duration navigation.
Kalman filtering for inertial navigation
State estimation
A Kalman filter is the standard tool for combining noisy IMU measurements with a mathematical model of how the robot moves. It estimates the navigation state, which typically includes position, velocity, and orientation.
The Kalman filter is optimal (gives the best possible estimate) when the system is linear and the noise is Gaussian. Real inertial navigation isn't perfectly linear, so extended Kalman filters (EKFs) are commonly used instead.

Error state vs total state
There are two main Kalman filter formulations for inertial navigation:
- Total state: The filter directly estimates position, velocity, and orientation.
- Error state: The filter estimates the errors in the strapdown integration solution, then uses those error estimates to correct the navigation output.
The error state formulation is more common in practice. The errors tend to be small and change slowly, which keeps the system closer to linear and improves numerical stability.
Measurement updates
The Kalman filter's real power comes from measurement updates using external sensors like GPS, cameras, or lidar. Here's how it works:
- A measurement model relates what the external sensor observes to the navigation state
- The filter computes a Kalman gain based on how uncertain the IMU prediction is versus how uncertain the external measurement is
- If the IMU prediction is very uncertain, the filter trusts the external measurement more (and vice versa)
- The navigation state is corrected, reducing the drift that built up since the last update
This is why inertial navigation and external sensors are so complementary: the IMU provides smooth, high-rate estimates between updates, and the external sensor periodically corrects the drift.
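The predict/correct cycle can be sketched in one dimension with a scalar uncertainty. This is a deliberate simplification (a real filter carries a full covariance matrix and a richer state), and the noise values are illustrative:

```python
def predict(x, p, accel, dt, q_noise):
    """IMU propagation of a 1D state x = (position, velocity)."""
    pos, vel = x
    pos += vel * dt + 0.5 * accel * dt * dt
    vel += accel * dt
    return (pos, vel), p + q_noise   # uncertainty grows every step

def update(x, p, gps_pos, r_noise):
    """GPS position update: blend prediction and measurement using
    the Kalman gain k = p / (p + r)."""
    k = p / (p + r_noise)
    pos = x[0] + k * (gps_pos - x[0])
    return (pos, x[1]), (1.0 - k) * p

# A biased IMU drifts away; one GPS fix pulls the position back.
x, p = (0.0, 0.0), 1.0
for _ in range(100):
    x, p = predict(x, p, accel=0.02, dt=0.1, q_noise=0.01)  # 0.02 m/s^2 bias
gps = 0.0                       # the true position is still 0
x, p = update(x, p, gps, r_noise=0.5)
```

After 10 seconds of drifting the prediction is quite uncertain, so the gain is large and the GPS fix removes most of the accumulated error, exactly the behavior described in the bullet points above.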
Inertial navigation applications
Autonomous vehicles
IMUs provide high-frequency motion estimates (often 100-1000 Hz) that are critical for vehicle control. Inertial navigation bridges GPS gaps in urban canyons, tunnels, and under bridges where satellite signals are blocked. Fusing IMU data with cameras, lidar, and GPS produces a navigation solution that's both smooth and globally accurate.
Robotics
For mobile robots and drones, IMUs enable attitude estimation and stabilization. In GPS-denied environments like indoor spaces, warehouses, or underground mines, inertial navigation supports odometry and mapping. MEMS IMUs are the standard choice here because they're lightweight, small, and inexpensive.
Aerospace systems
Aircraft, spacecraft, and missiles use high-grade IMUs (fiber-optic or ring laser gyroscopes) that cost orders of magnitude more than MEMS sensors but drift far more slowly. Inertial navigation provides autonomous operation when external signals aren't available, with GPS integration for long-duration accuracy.
Challenges of inertial navigation
Error accumulation over time
This is the fundamental limitation. Because inertial navigation relies on integrating sensor measurements, errors grow without bound. Position error from accelerometer bias grows with t², and position error from gyroscope bias grows with t³ (because orientation errors corrupt acceleration measurements, which then get double-integrated). Without external corrections, even a good IMU will drift to unusable accuracy within minutes.
Cost vs performance tradeoffs
There's a huge range in IMU quality and price:
- MEMS IMUs: A few dollars to a few hundred dollars. Compact and light, but higher noise and bias instability. Suitable for most robotics applications with frequent external updates.
- Fiber-optic / ring laser gyroscopes: Thousands to tens of thousands of dollars. Much lower drift, but larger and heavier. Used in aerospace and military applications.
Choosing the right IMU depends on how long you need to navigate without external corrections and what accuracy you require.
Integration with other sensors
Standalone inertial navigation is rarely sufficient, so sensor fusion is essential. Key challenges include:
- Time synchronization: Sensors run at different rates and have different latencies. Measurements need to be properly timestamped.
- Coordinate frame alignment: Each sensor may define its own coordinate frame. These need to be calibrated relative to each other.
- Fusion algorithms: Kalman filters and particle filters are the main tools for optimally combining measurements from different sources.
When done well, sensor fusion lets each sensor compensate for the others' weaknesses: the IMU fills in between slow GPS updates, GPS corrects IMU drift, and cameras or lidar provide corrections in GPS-denied areas.