An Extended Kalman Filter (EKF) is a nonlinear version of the Kalman Filter. It estimates the hidden state of a system when the motion model or measurement model is nonlinear.

Common robotics use case: estimate vehicle position, velocity, heading, sensor biases, and orientation from noisy sensors like IMU, GPS, compass, airspeed, wheel odometry, camera measurements, and barometer readings.

Core idea

A normal Kalman Filter assumes the system is linear:

    x_k = F x_{k-1} + B u_k + w_k
    z_k = H x_k + v_k

EKF handles nonlinear models:

    x_k = f(x_{k-1}, u_k) + w_k
    z_k = h(x_k) + v_k

where:

  • x_k is the state at time k
  • u_k is the control input
  • z_k is the measurement
  • f is the nonlinear motion model
  • h is the nonlinear measurement model
  • w_k is process noise
  • v_k is measurement noise
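As a concrete illustration of what f and h can look like, here is a minimal sketch of a hypothetical 2D unicycle robot: the state is [px, py, theta], the control is [v, omega], and the sensor measures range and bearing to the origin. The model and numbers are made up for illustration, not taken from any particular system.

```python
import numpy as np

def f(x, u, dt=0.1):
    """Nonlinear motion model: unicycle with forward speed v and turn rate omega."""
    px, py, theta = x
    v, omega = u
    return np.array([
        px + v * np.cos(theta) * dt,
        py + v * np.sin(theta) * dt,
        theta + omega * dt,
    ])

def h(x):
    """Nonlinear measurement model: range and bearing to the origin."""
    px, py, theta = x
    rng = np.hypot(px, py)
    bearing = np.arctan2(py, px) - theta
    return np.array([rng, bearing])

# Driving straight along +x from (1, 0): position advances by v*dt.
print(f(np.array([1.0, 0.0, 0.0]), [1.0, 0.0], dt=0.1))  # -> [1.1 0.  0. ]
```

Both functions involve trigonometry, so neither can be written as a constant matrix times the state; that is exactly what forces the move from KF to EKF.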

The EKF keeps a Gaussian belief over the state:

    x_k ~ N(x̂_k, P_k)

where x̂_k is the best state estimate and P_k is the covariance matrix measuring uncertainty.

Linearization

The main trick in EKF is local linearization.

Even if f and h are nonlinear, EKF approximates them as linear near the current estimate using a first-order Taylor expansion. This requires Jacobian matrices:

    F_k = ∂f/∂x, evaluated at the current state estimate
    H_k = ∂h/∂x, evaluated at the predicted state

  • F_k describes how small state errors evolve through the motion model
  • H_k describes how small state errors affect the predicted measurement
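Jacobian derivations are a common source of EKF bugs, so it is worth checking them numerically. The sketch below derives F_k by hand for the hypothetical unicycle model and compares it against a central finite difference; all names here are illustrative.

```python
import numpy as np

def f(x, u, dt=0.1):
    """Unicycle motion model (hypothetical example)."""
    px, py, theta = x
    v, omega = u
    return np.array([px + v * np.cos(theta) * dt,
                     py + v * np.sin(theta) * dt,
                     theta + omega * dt])

def F_jacobian(x, u, dt=0.1):
    """Analytic F_k = df/dx for the model above."""
    theta, v = x[2], u[0]
    return np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                     [0.0, 1.0,  v * np.cos(theta) * dt],
                     [0.0, 0.0,  1.0]])

def numerical_jacobian(func, x, eps=1e-6):
    """Central-difference Jacobian, handy for verifying analytic derivations."""
    m = len(func(x))
    J = np.zeros((m, len(x)))
    for i in range(len(x)):
        d = np.zeros(len(x))
        d[i] = eps
        J[:, i] = (func(x + d) - func(x - d)) / (2 * eps)
    return J

x, u = np.array([2.0, 1.0, 0.3]), [1.5, 0.2]
print(np.allclose(F_jacobian(x, u), numerical_jacobian(lambda s: f(s, u), x)))  # -> True
```

If the two disagree, the analytic derivation (or the model itself) has a sign or term error.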

Prediction step

Use the motion model to predict the next state:

    x̂_{k|k-1} = f(x̂_{k-1|k-1}, u_k)

Propagate uncertainty through the linearized dynamics:

    P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k

Intuition:

  • f moves the state estimate forward
  • F_k P F_k^T moves the uncertainty forward
  • Q_k adds uncertainty from imperfect dynamics and unmodeled forces
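The prediction step can be sketched as a small standalone function. The constant-velocity model and noise values here are hypothetical, chosen only so the function has something to run on.

```python
import numpy as np

def ekf_predict(x, P, u, f, F_jac, Q):
    """EKF prediction: push the mean through f, the covariance through F P F^T + Q."""
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

# Hypothetical near-linear example: state = [position, velocity].
dt = 0.1
f = lambda x, u: np.array([x[0] + x[1] * dt, x[1]])
F_jac = lambda x, u: np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])  # process noise, tuned by hand for this example

x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = ekf_predict(x, P, None, f, F_jac, Q)
print(x)  # -> [0.1 1. ] : position advanced by v*dt
```

Note that the covariance grows in this step: F P F^T carries the old uncertainty forward and Q adds new uncertainty on top.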

Update step

Predict what the sensor should read:

    ẑ_k = h(x̂_{k|k-1})

Compute the innovation, or measurement residual:

    y_k = z_k − ẑ_k

Compute innovation covariance:

    S_k = H_k P_{k|k-1} H_k^T + R_k

Compute Kalman gain:

    K_k = P_{k|k-1} H_k^T S_k^{-1}

Update the state estimate:

    x̂_{k|k} = x̂_{k|k-1} + K_k y_k

Update covariance:

    P_{k|k} = (I − K_k H_k) P_{k|k-1}

The Joseph form is more numerically stable:

    P_{k|k} = (I − K_k H_k) P_{k|k-1} (I − K_k H_k)^T + K_k R_k K_k^T
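The update equations above translate almost line for line into code. This sketch uses the Joseph-form covariance update; the direct position measurement of a [position, velocity] state is a hypothetical example.

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """EKF update with the Joseph-form covariance update for numerical stability."""
    z_pred = h(x)
    H = H_jac(x)
    y = z - z_pred                            # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ y
    I_KH = np.eye(len(x)) - K @ H
    P_new = I_KH @ P @ I_KH.T + K @ R @ K.T   # Joseph form
    return x_new, P_new

# Hypothetical example: measure position directly from a [position, velocity] state.
h = lambda x: x[:1]
H_jac = lambda x: np.array([[1.0, 0.0]])
R = np.array([[0.01]])

x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = ekf_update(x, P, np.array([0.2]), h, H_jac, R)
print(x)  # estimate pulled most of the way toward the measurement 0.2
```

Because the prior P is large relative to R here, the gain is close to 1 and the update trusts the sensor strongly, matching the intuition in the next section.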

What the matrices mean

  • P: current uncertainty in the state estimate
  • Q: process noise, how much you distrust the motion model
  • R: measurement noise, how much you distrust the sensor
  • F_k: local linear approximation of the dynamics
  • H_k: local linear approximation of the measurement model
  • S_k: uncertainty of the measurement residual
  • K_k: Kalman gain, how strongly the estimate should move toward the measurement

If R is large, the filter trusts the sensor less. If Q is large, the filter trusts the model less and adapts faster to measurements.

Algorithm

  1. Start with an initial estimate x̂_0 and covariance P_0
  2. Predict the next state using f
  3. Compute the Jacobian F_k
  4. Predict covariance using F_k P F_k^T + Q_k
  5. Predict the measurement using h
  6. Compute the Jacobian H_k
  7. Compare the real measurement z_k with the predicted measurement ẑ_k
  8. Use the Kalman gain K_k to correct the state and covariance
  9. Repeat for every timestep
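Putting the whole loop together, here is a minimal end-to-end sketch on a hypothetical 1D constant-velocity target with noisy position measurements. All models, noise levels, and the random seed are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
f = lambda x: np.array([x[0] + x[1] * dt, x[1]])   # motion model
F_jac = lambda x: np.array([[1.0, dt], [0.0, 1.0]])
h = lambda x: x[:1]                                # measure position only
H_jac = lambda x: np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-3])
R = np.array([[0.04]])

x_true = np.array([0.0, 1.0])
x_est, P = np.array([0.0, 0.0]), np.eye(2)         # 1. initial estimate and covariance

for _ in range(100):
    x_true = f(x_true)
    z = h(x_true) + rng.normal(0.0, 0.2, 1)        # noisy sensor reading

    # 2-4. predict state and covariance
    x_est = f(x_est)
    F = F_jac(x_est)
    P = F @ P @ F.T + Q

    # 5-8. predict measurement, compare, correct with the Kalman gain
    H = H_jac(x_est)
    y = z - h(x_est)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x_est)  # should end up close to the true state [10.0, 1.0]
```

The velocity is never measured directly; the filter infers it from the sequence of position measurements, which is exactly the hidden-state estimation described at the top.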

Important implementation details

  • Angle residuals must be wrapped, for example to [−π, π)
  • Quaternions should be normalized after updates
  • Q and R need realistic tuning; bad noise matrices cause bad estimates
  • P should stay symmetric and positive semi-definite
  • Units must be consistent across state, model, and sensor measurements
  • Bad initial guesses can cause divergence because the linearization is only local
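Two of the pitfalls above have one-line fixes worth keeping around; these helper names are my own, not from any library:

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle (or angle residual) into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def symmetrize(P):
    """Force P back to exact symmetry after floating-point drift."""
    return 0.5 * (P + P.T)

# Without wrapping, a residual of 3*pi/2 would drag the heading the long
# way around; wrapped, it becomes the short way: -pi/2.
print(wrap_angle(3 * np.pi / 2))  # -> -1.5707963...
```

Wrapping must be applied to the innovation y_k for every angular measurement, and symmetrize(P) is typically applied after each covariance update.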

When EKF works well

EKF works well when:

  • the nonlinear functions are smooth and differentiable
  • the estimate is already close to the true state
  • the system is only mildly nonlinear over one timestep
  • noise is roughly Gaussian
  • the Jacobians are correct

EKF struggles when the state uncertainty is very large, the system is highly nonlinear, or the belief is not well represented by one Gaussian.

EKF vs regular Kalman Filter

Filter                     Model type   Main assumption
-------------------------  -----------  ---------------------------------------------------------
Kalman Filter              Linear       Exact Gaussian belief update
Extended Kalman Filter     Nonlinear    Approximate update by local linearization
Unscented Kalman Filter    Nonlinear    Approximate nonlinear belief transform using sigma points

The EKF is popular because it is efficient and relatively simple, but its quality depends heavily on the linearization and noise tuning.