🤖Intro to Autonomous Robots Unit 5 Review

5.1 Odometry

Written by the Fiveable Content Team • Last updated August 2025
Odometry is a technique for estimating a robot's position and orientation based on its motion. By measuring how wheels rotate (or how the body accelerates), a robot can keep a running estimate of where it is. This makes odometry one of the most fundamental building blocks for navigation and localization.

The catch: odometry errors accumulate over time, so the estimate drifts further and further from reality. That's why odometry is almost always combined with other sensors to keep the robot on track.

Odometry Overview

Odometry works by reading motion sensors and translating those readings into position and orientation updates. The most common sensor for wheeled robots is the wheel encoder, but inertial measurement units (IMUs) can also contribute motion data.

The robot maintains a pose, which is its estimated position and orientation in space. Every time the sensors report new motion, the pose gets updated. Over short distances, this works well. Over long distances, small errors in each update start to add up.

Wheel Encoders for Odometry

Wheel encoders measure how much each wheel has rotated. They generate pulses as the wheel turns, and each pulse corresponds to a small increment of rotation.

  • Encoder types include optical, magnetic, and capacitive, but they all do the same basic job: output a signal proportional to wheel rotation.
  • By counting pulses and multiplying by the known wheel circumference, the robot calculates the linear distance that wheel has traveled.
  • For example, if an encoder produces 1000 pulses per revolution and the wheel has a circumference of 0.314 m, each pulse represents about 0.000314 m of travel.
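The pulse-to-distance conversion above can be sketched in a few lines. The constants mirror the example's hypothetical encoder (1000 pulses per revolution, 0.314 m circumference):

```python
# Hypothetical encoder parameters matching the example above.
PULSES_PER_REV = 1000
WHEEL_CIRCUMFERENCE_M = 0.314

def pulses_to_distance(pulse_count: int) -> float:
    """Convert an encoder pulse count to linear distance traveled (meters)."""
    return pulse_count * WHEEL_CIRCUMFERENCE_M / PULSES_PER_REV

print(pulses_to_distance(1))     # about 0.000314 m per pulse
print(pulses_to_distance(1000))  # one full revolution: 0.314 m
```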

Odometry Calculations

For a differential drive robot (two independently driven wheels on a shared axle), odometry uses the velocities of the left and right wheels to compute how the robot moved.

The core equations:

  • $\Delta x = \frac{(v_r + v_l)}{2} \Delta t \cos(\theta)$
  • $\Delta y = \frac{(v_r + v_l)}{2} \Delta t \sin(\theta)$
  • $\Delta \theta = \frac{(v_r - v_l)}{L} \Delta t$

Where:

  • $v_r$ and $v_l$ are the right and left wheel velocities
  • $\Delta t$ is the time interval between updates
  • $\theta$ is the robot's current heading angle
  • $L$ is the distance between the two wheels (the wheelbase)

The first two equations compute how far the robot moved in the x and y directions. The third computes how much the robot rotated. If both wheels spin at the same speed, $\Delta \theta = 0$ and the robot drives straight. If they differ, the robot turns.

Each update cycle, these deltas get added to the robot's current pose estimate.
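The update cycle above can be sketched directly from the three equations. This is a minimal illustration, not a production odometry node; the parameter values in the usage example are arbitrary:

```python
import math

def update_pose(x, y, theta, v_l, v_r, dt, wheelbase):
    """One odometry update for a differential drive robot.

    v_l, v_r: left/right wheel linear velocities (m/s)
    dt: time step (s); wheelbase: axle length L (m)
    """
    v = (v_r + v_l) / 2.0            # forward velocity of the robot center
    omega = (v_r - v_l) / wheelbase  # angular velocity
    x += v * dt * math.cos(theta)
    y += v * dt * math.sin(theta)
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds: the robot drives straight along its current heading.
x, y, theta = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, 0.1, 0.5)
print(x, y, theta)  # → 0.1 0.0 0.0
```

Note that the heading used in the cos/sin terms is the heading *before* the rotation update, matching the equations; more accurate integrators (e.g., midpoint) use the average heading over the step.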

Odometry in 2D Space

Odometry typically tracks the robot in a 2D plane using three variables:

  • x-coordinate and y-coordinate for position
  • Heading angle $\theta$ for orientation

These values are relative to wherever the robot started, which becomes the origin of a local coordinate frame. Odometry doesn't know anything about the global world; it only knows how far it has moved from its starting point.

Advantages of Odometry

  • Simple and computationally cheap. The math is straightforward and runs easily at high update rates.
  • No external infrastructure required. Unlike GPS or beacons, odometry works anywhere with no setup.
  • High-frequency updates. Encoders can report hundreds or thousands of times per second, giving smooth, continuous pose tracking.
  • Low cost. Wheel encoders are inexpensive compared to lidar, cameras, or GPS receivers.

Limitations of Odometry

  • Cumulative errors. Every small measurement error gets baked into all future estimates. There's no self-correction.
  • No absolute position. Odometry only tracks relative motion from a starting point. It can't tell the robot where it is in the world.
  • Vulnerable to real-world conditions. Wheel slippage, bumpy terrain, and sensor noise all degrade accuracy.
  • Degrades over distance. The longer the robot drives, the less trustworthy the estimate becomes.

Odometry Error Sources

Odometry errors fall into two categories: systematic (predictable, repeatable) and non-systematic (random, unpredictable). Both contribute to drift, but they require different strategies to address.

Systematic Odometry Errors

Systematic errors come from imperfections in the robot's hardware or calibration. They produce consistent bias in the same direction every time.

Common sources:

  • Wheel diameter mismatch. If the left wheel is slightly larger than the right, the robot will consistently curve to one side even when commanded to drive straight.
  • Wheel misalignment. Wheels that aren't perfectly parallel introduce a constant rotational bias.
  • Encoder resolution limits. Coarse encoders can't capture fine movements, rounding off small motions.

Because these errors are consistent, you can measure and compensate for them through calibration. The UMBmark (University of Michigan Benchmark) is a standard test where the robot drives in a square and the final position error reveals systematic biases in the odometry system.

Non-Systematic Odometry Errors

Non-systematic errors are random and harder to predict. They come from the environment or unexpected events.

Common sources:

  • Wheel slippage on smooth or wet surfaces
  • Uneven terrain causing inconsistent wheel contact
  • Sudden accelerations or decelerations
  • Objects caught under wheels

These errors can't be calibrated away. Instead, they're handled through additional sensing, filtering, or probabilistic methods that account for uncertainty.


Wheel Slippage Impact

Wheel slippage happens when a wheel rotates but doesn't actually move the robot the expected distance. The encoder still counts pulses, so the odometry system thinks the robot moved, but it didn't (or moved less than expected).

Slippage is common on:

  • Low-friction surfaces (tile, polished concrete, wet ground)
  • Steep inclines where wheels lose traction
  • During rapid acceleration or hard braking

Mitigation strategies include traction control systems, slip detection algorithms (comparing expected vs. actual motion using an IMU), and slip compensation models.
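A slip detector of the kind described (comparing expected vs. actual motion using an IMU) can be sketched by comparing the yaw rate the encoders *imply* with the yaw rate a gyroscope *measures*. The threshold value here is an assumption for illustration:

```python
def detect_slip(v_l, v_r, gyro_omega, wheelbase, threshold=0.1):
    """Flag likely wheel slip by comparing the yaw rate implied by the
    encoders with the yaw rate measured directly by a gyroscope.
    threshold (rad/s) is a tuning parameter; 0.1 is an illustrative value."""
    encoder_omega = (v_r - v_l) / wheelbase
    return abs(encoder_omega - gyro_omega) > threshold

# Encoders claim a sharp turn, but the gyro sees almost none: probable slip.
print(detect_slip(v_l=0.2, v_r=1.0, gyro_omega=0.05, wheelbase=0.5))  # → True
```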

Uneven Terrain Challenges

Bumps, holes, and slopes cause problems because they change how the wheels interact with the ground. A wheel rolling over a bump rotates more than it would on flat ground, but the robot's actual forward progress is different from what the encoder reports.

Approaches to reduce terrain-related errors include using larger wheels (which are less affected by small obstacles), adding suspension systems, and using adaptive odometry algorithms that adjust calculations based on detected terrain conditions.

Odometry Error Accumulation

The defining weakness of odometry is that errors accumulate without bound. Each update introduces a small error, and since every future update builds on the previous estimate, those errors compound.

Unbounded Error Growth

Unlike a sensor that gives you an independent measurement each time (like GPS), odometry is purely incremental. There's no built-in mechanism to "reset" or correct the estimate.

Error growth is roughly proportional to the distance traveled. A robot that has driven 100 meters will generally have more odometry drift than one that has driven 10 meters. This makes odometry unreliable for long-duration navigation without some form of external correction.

Odometry Drift Over Time

Drift is the gradual divergence between where the robot thinks it is and where it actually is. Small heading errors are especially damaging because a slight error in $\theta$ causes the robot to project all future motion in a slightly wrong direction, and that directional error grows with distance.

For example, a heading error of just 1 degree means the robot's position estimate will be off by about 1.7 meters after traveling 100 meters, even if the distance measurement is perfect.
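The 1-degree figure comes from simple trigonometry: the lateral offset is the distance traveled times the sine of the heading error.

```python
import math

distance = 100.0                    # meters traveled
heading_error = math.radians(1.0)   # 1 degree heading error, in radians

# Lateral position error from traveling in a slightly wrong direction.
lateral_error = distance * math.sin(heading_error)
print(round(lateral_error, 2))  # → 1.75
```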

Odometry vs. Ground Truth

Ground truth is the robot's actual pose, measured by an independent, accurate system. Common ground truth sources include:

  • Motion capture systems (very precise, indoor only)
  • GPS (outdoor, meter-level accuracy for standard receivers)
  • Fiducial markers (known visual targets placed in the environment)

Comparing odometry to ground truth lets you quantify how much drift has accumulated and evaluate whether your calibration or sensor fusion is working.

Error Correction Techniques

Since odometry drifts, you need ways to pull the estimate back toward reality:

  1. Sensor fusion combines odometry with other sensors (IMU, GPS, lidar) so that each sensor compensates for the others' weaknesses.
  2. Kalman filters (especially the Extended Kalman Filter) and particle filters are the standard algorithms for fusing multiple sensor streams into a single probabilistic pose estimate.
  3. Loop closure detection recognizes when the robot returns to a previously visited location and uses that recognition to correct accumulated drift.
  4. Landmark-based correction uses known features in the environment to periodically reset or adjust the pose estimate.

Combining Odometry with Other Sensors

Odometry alone is rarely sufficient for reliable localization. Sensor fusion combines odometry's strengths (high frequency, low cost, no infrastructure) with other sensors that can provide absolute references or independent motion estimates.


Odometry and Inertial Sensors

IMUs contain accelerometers and gyroscopes that measure linear acceleration and angular velocity. They're useful because they detect motion independently of the wheels.

  • Gyroscopes are particularly helpful for heading estimation, since they measure rotation directly rather than inferring it from wheel speed differences.
  • If the wheels slip, the IMU still reports the robot's actual rotational motion, helping to detect and compensate for slippage.
  • The Extended Kalman Filter (EKF) is commonly used to fuse wheel odometry with IMU data, weighting each source based on its estimated uncertainty.

IMUs have their own drift problem (gyroscope bias), so they don't replace odometry. The two sensors complement each other.
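A full EKF is the standard tool here, but the core idea of blending two heading sources by trust can be shown with a much simpler complementary filter. This is a stand-in sketch, not the EKF itself; the weight `alpha` is an illustrative assumption:

```python
def fuse_heading(encoder_theta, gyro_theta, alpha=0.98):
    """Blend two heading estimates with a complementary filter, a simple
    stand-in for the EKF fusion described above. alpha weights the
    gyro-derived heading; (1 - alpha) weights the encoder-derived heading."""
    return alpha * gyro_theta + (1 - alpha) * encoder_theta

# Encoders report a slip-inflated turn; the gyro, trusted more, pulls the
# fused heading close to its own reading.
fused = fuse_heading(encoder_theta=0.30, gyro_theta=0.10)
print(round(fused, 3))  # → 0.104
```

An EKF does the same weighting dynamically, deriving the blend factor each step from the estimated uncertainty of each source instead of a fixed constant.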

Odometry and GPS Fusion

GPS provides absolute position in outdoor environments, which directly addresses odometry's biggest weakness: the lack of a global reference.

  • GPS updates are relatively slow (typically 1-10 Hz) compared to odometry (often 100+ Hz).
  • Between GPS updates, odometry fills in the gaps with smooth, high-frequency pose estimates.
  • A Kalman filter fuses the two by trusting GPS for absolute position and trusting odometry for short-term motion between GPS fixes.
  • GPS accuracy varies (standard receivers give ~2-5 m accuracy; RTK GPS can reach centimeter level), so the filter needs to account for GPS uncertainty.
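The "trust each source by its uncertainty" behavior of the Kalman filter is easiest to see in one dimension. This toy measurement update blends an odometry-predicted position with a GPS fix; all numbers are illustrative:

```python
def kalman_update_1d(x_pred, var_pred, z_gps, var_gps):
    """One-dimensional Kalman measurement update: blend the odometry
    prediction (x_pred, var_pred) with a GPS fix (z_gps, var_gps)."""
    k = var_pred / (var_pred + var_gps)   # Kalman gain: trust ∝ certainty
    x_new = x_pred + k * (z_gps - x_pred)
    var_new = (1 - k) * var_pred          # fusing always reduces variance
    return x_new, var_new

# Odometry predicts 10.0 m (variance 4.0); GPS reports 12.0 m (variance 4.0).
# Equal uncertainty → the estimate lands halfway, with reduced variance.
x, var = kalman_update_1d(10.0, 4.0, 12.0, 4.0)
print(x, var)  # → 11.0 2.0
```

A less certain GPS fix (larger `var_gps`) shrinks the gain `k`, so the filter moves the estimate less — exactly how the filter "accounts for GPS uncertainty."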

Visual Odometry Techniques

Visual odometry uses camera images instead of (or alongside) wheel encoders to estimate motion. The process works by:

  1. Detecting visual features (corners, edges, distinctive patterns) in one camera frame.
  2. Matching those features in the next frame.
  3. Computing the camera's motion between frames using geometric relationships (epipolar geometry).

Visual odometry is especially useful on robots where wheels slip frequently or on legged/flying robots that don't have wheels at all. It works best in environments with rich visual texture and consistent lighting. Combining visual odometry with wheel odometry through sensor fusion gives more robust estimates than either alone.

Sensor Fusion Algorithms

The two most common sensor fusion frameworks in robotics:

  • Extended Kalman Filter (EKF): Maintains a single pose estimate plus a covariance matrix representing uncertainty. Each sensor update refines the estimate based on how uncertain the current estimate is vs. how uncertain the new measurement is. Works well when the system is roughly linear and noise is roughly Gaussian.
  • Particle Filter (Monte Carlo Localization): Represents the pose as a cloud of weighted particles, each one a hypothesis about where the robot might be. Sensor observations shift weight toward particles that match the observations. Better at handling highly non-linear systems or multi-modal distributions (e.g., the robot could be in one of several locations).

Both approaches assign appropriate weight to each sensor based on its reliability, so a noisy sensor contributes less to the final estimate than a precise one.
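The particle filter cycle (move, weight, resample) can be shown in a deliberately tiny setting: a robot on a 1D corridor measuring its distance to one known landmark. Everything here (corridor, landmark position, noise values) is an invented toy scenario, not a real MCL implementation:

```python
import math
import random

def mcl_step(particles, motion, measurement, landmark, noise_std=0.5):
    """One Monte Carlo Localization cycle on a 1D corridor (toy example):
    move each particle, weight it by how well its predicted distance to a
    known landmark matches the measured distance, then resample."""
    # Motion update: shift every hypothesis, with a little process noise.
    moved = [p + motion + random.gauss(0, 0.05) for p in particles]
    # Measurement update: Gaussian likelihood of the observed range.
    weights = [
        math.exp(-((abs(landmark - p) - measurement) ** 2) / (2 * noise_std**2))
        for p in moved
    ]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: particles that match the observation survive and multiply.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0, 7) for _ in range(500)]
# The robot moves +1 m, then measures 3 m to a landmark at position 8.
particles = mcl_step(particles, motion=1.0, measurement=3.0, landmark=8.0)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # clusters near 5, i.e. 8 - 3
```

With a wider initial spread, particles would also survive on the far side of the landmark (position 11), illustrating the multi-modal distributions that particle filters handle and single-estimate Kalman filters cannot.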

Odometry in Real-World Applications

Odometry in Mobile Robots

Indoor mobile robots (warehouse robots, vacuum cleaners, service robots) use odometry as their primary short-term motion tracker. It's typically fused with laser scanners, cameras, or IMUs for reliable indoor localization. Outdoor mobile robots add GPS to the mix for global positioning.

Odometry in Autonomous Vehicles

Self-driving cars and autonomous forklifts use wheel encoders and IMUs for high-frequency odometry, providing continuous motion estimates between updates from lidar, cameras, and GPS. Odometry is especially critical during brief GPS outages (tunnels, urban canyons) where it bridges the gap until the global signal returns.

Odometry in Indoor Navigation

Indoor environments lack GPS, making odometry even more important as a baseline motion estimate. To correct for drift indoors, robots often rely on:

  • SLAM (Simultaneous Localization and Mapping) algorithms, which use odometry as the motion model while building and refining a map from sensor observations
  • Wi-Fi fingerprinting or Bluetooth beacons for coarse position corrections
  • Visual markers or fiducial tags for precise, local corrections

Odometry-Based Localization

Odometry-based localization uses odometry as the foundation and layers corrections on top. On its own, odometry gives you a relative pose that drifts. Combined with landmark recognition, map matching, or periodic sensor corrections, it becomes part of a localization system that's both responsive (high update rate from odometry) and accurate (corrections from external references).

This combination is especially valuable in GPS-denied environments like warehouses, mines, or indoor facilities where odometry provides the continuous motion thread that other sensors periodically correct.