The Kalman Filter: A Tool to Help Robots See Through the Noise

Daniel Sepulveda Estay, PhD
6 min read · Apr 25, 2023


Imagine you’re in a room filled with fog, and you’re trying to find your way to the door. The fog makes it difficult to see clearly, so you might need to rely on your other senses, like hearing or touch, to help you reach your destination. Now, imagine a robot facing a similar challenge: trying to navigate through a world filled with noise and uncertainty. How can robots make sense of their surroundings and make the right decisions to complete their tasks? This is where the Kalman Filter can help.

The Kalman Filter is a powerful mathematical tool that helps robots and other devices “see through the noise” and make accurate estimates about their surroundings. It combines what a robot already knows from its sensors with its understanding of how the world works, even when the information it receives is incomplete or uncertain.

Invented in the early 1960s by Rudolf Kalman, this ingenious algorithm has found its way into countless applications, from guiding spaceships and self-driving cars to tracking objects in computer vision and predicting stock prices. The Kalman Filter is a prime example of how mathematics can help us make sense of a messy, noisy world, and empower our technologies to become smarter and more efficient.

A Kalman filter is an algorithm that helps us estimate the true state of a dynamic system using noisy, incomplete measurements. It is like trying to guess the position and speed of a car when you can only see it through a blurry window.

Here’s a simple explanation of how a Kalman filter works:

  1. Prediction: The Kalman filter starts by making a prediction of the system’s state based on its previous state and any control inputs (e.g., acceleration). This prediction helps us determine where we think the system is going to be at the next time step.
  2. Update: The filter then takes a new measurement, which is a noisy observation of the actual system state (e.g., position and speed of the car). It uses this measurement to update its prediction, combining both the prediction and the new measurement to get a better estimate of the true state.

The Kalman filter repeats these two steps — prediction and update — recursively as new measurements come in. Over time, the filter’s state estimate gets closer and closer to the true state, despite the noise in the measurements.

The magic of the Kalman filter lies in how it combines the prediction and measurement. It intelligently weighs the importance of the prediction and the new measurement based on their uncertainties. If the prediction is more reliable than the measurement, the filter will rely more on the prediction. Conversely, if the measurement is more reliable than the prediction, the filter will rely more on the measurement.
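To make this weighting concrete, here is a minimal one-dimensional sketch (illustrative numbers only; the full filter is derived below), where the gain is simply the ratio of the prediction variance to the total variance:

# Minimal scalar illustration of how a Kalman-style update blends a
# prediction and a measurement according to their variances.
def blend(prediction, pred_var, measurement, meas_var):
    K = pred_var / (pred_var + meas_var)    # gain: how much to trust the measurement
    estimate = prediction + K * (measurement - prediction)
    est_var = (1 - K) * pred_var            # uncertainty shrinks after the update
    return estimate, est_var

# Uncertain prediction, precise measurement -> lean on the measurement
print(blend(10.0, 4.0, 12.0, 1.0))   # (11.6, 0.8)

# Precise prediction, noisy measurement -> stay near the prediction
print(blend(10.0, 1.0, 12.0, 4.0))   # (10.4, 0.8)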

Process step by step

Let’s dive into the mathematical details of the Kalman filter using a simple one-dimensional example.

Imagine you have a car moving in a straight line. The state of the car is given by its position (x) and velocity (v), which we collect into a state vector x = [x, v] (the same symbol x is conventionally used for both the state vector and its position component). The car is subject to noisy position measurements, and we want to estimate its true state.

Here are the mathematical steps of the Kalman filter:

1.- System dynamics: Define the state transition matrix A, which describes how the state evolves from one time step to another, and the control input matrix B (if applicable). In our example, we have:

A = [[1, Δt],
     [0, 1]]

B = [[0.5 * Δt^2],
     [Δt]]

Where Δt is the time step duration.
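As a quick sanity check, here is a small NumPy sketch of how A and B act on a state vector, assuming an illustrative Δt of 0.1 s and an arbitrary acceleration input (these particular numbers are not part of the example that follows):

import numpy as np

dt = 0.1                          # assumed time step duration (seconds)

A = np.array([[1, dt],
              [0, 1]])            # constant-velocity state transition
B = np.array([[0.5 * dt**2],
              [dt]])              # maps an acceleration input into the state

x = np.array([[2.0],              # position
              [1.0]])             # velocity
u = np.array([[0.5]])             # acceleration input (illustrative)

# A @ x gives [position + velocity * dt, velocity];
# B @ u adds the effect of the acceleration over one time step.
x_next = A @ x + B @ u
print(x_next)                     # [[2.1025], [1.05]]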

2.- Prediction: Predict the next state x_pred based on the current state x and control input u (if applicable) using the system dynamics:

x_pred = A * x + B * u

3.- Covariance prediction: Predict the covariance matrix P_pred of the state estimate uncertainty, taking into account the process noise covariance Q:

P_pred = A * P * A^T + Q

4.- Measurement update: Take a new noisy measurement z, and compute the residual y between the measurement and the predicted state:

y = z - H * x_pred

Here, H is the measurement matrix, which relates the state to the measurement. In our example, we only measure position, so H = [1, 0].

5.- Kalman gain: Compute the Kalman gain K, which determines how much we trust the measurement compared to the prediction:

K = P_pred * H^T * (H * P_pred * H^T + R)^(-1)

Here, R is the measurement noise covariance.
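For intuition, in the scalar case (H = [1]) the gain reduces to K = P_pred / (P_pred + R): as the measurement noise R shrinks, K approaches 1 and the filter follows the measurement; as R grows, K approaches 0 and the filter sticks with its prediction. A small sketch with illustrative numbers:

import numpy as np

P_pred = np.array([[1.0]])        # predicted estimate uncertainty (scalar case)
H = np.array([[1.0]])             # we measure the state directly

for R_value in (0.01, 1.0, 100.0):
    R = np.array([[R_value]])
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    print(f"R = {R_value:6}: K = {K[0, 0]:.3f}")

# R =   0.01: K = 0.990  -> the update follows the measurement closely
# R =    1.0: K = 0.500  -> prediction and measurement weighted equally
# R =  100.0: K = 0.010  -> the update barely moves from the prediction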

6.- State update: Update the state estimate x using the residual y and the Kalman gain K:

x = x_pred + K * y

7.- Covariance update: Update the covariance matrix P to reflect the updated state uncertainty:

P = (I - K * H) * P_pred

Where I is the identity matrix.

The Kalman filter repeats these steps (prediction and update) recursively as new measurements come in. Over time, the filter’s state estimate gets closer to the true state, despite the noise in the measurements.

A complete example

Let’s implement the one-dimensional Kalman filter example in Python. In this example, we’ll assume a constant velocity model for the car and generate noisy position measurements to estimate the true state (position and velocity) using the Kalman filter.

This code simulates a car moving with a constant velocity, generates noisy position measurements, and then uses the Kalman filter to estimate the position and velocity over time. The results are plotted, showing the true position and velocity, noisy measurements, and the Kalman filter estimates.

import numpy as np
import matplotlib.pyplot as plt

# Time step and total time
dt = 0.1
total_time = 10
timesteps = int(total_time / dt)

# System matrices
A = np.array([[1, dt],
              [0, 1]])

# We assume no control input in this example
B = None
u = None

# Measurement matrix
H = np.array([[1, 0]])

# Process noise covariance
Q = np.array([[0.001, 0],
              [0, 0.001]])

# Measurement noise covariance
R = np.array([[0.1]])

# Initial true state: position 0, velocity 1
x = np.array([[0.0],
              [1.0]])

# Generate true states and noisy measurements
true_states = np.zeros((timesteps, 2))
measurements = np.zeros((timesteps, 1))

for t in range(timesteps):
    true_states[t] = x.flatten()
    measurements[t] = (H @ x).item() + np.random.normal(0, np.sqrt(R[0, 0]))
    x = A @ x

# Kalman filter implementation
def kalman_filter(A, H, Q, R, z, x, P):
    # Prediction
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q

    # Update
    y = z - H @ x_pred
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + K @ y
    P = (np.eye(2) - K @ H) @ P_pred

    return x, P

# Initial state estimate and covariance for the filter
# (the generation loop above overwrote x, so we restart from a guess)
x = np.array([[0.0],
              [0.0]])
P = np.array([[1, 0],
              [0, 1]])

# Apply the Kalman filter
estimated_states = np.zeros((timesteps, 2))

for t in range(timesteps):
    z = measurements[t].reshape(-1, 1)
    x, P = kalman_filter(A, H, Q, R, z, x, P)
    estimated_states[t] = x.flatten()

# Plot results
plt.plot(true_states[:, 0], label='True position')
plt.plot(measurements, label='Noisy measurements', linestyle='--', alpha=0.5)
plt.plot(estimated_states[:, 0], label='Estimated position')
plt.xlabel('Time step')
plt.ylabel('Position')
plt.legend()
plt.title('Kalman Filter: Position Estimation')
plt.show()

plt.plot(true_states[:, 1], label='True velocity')
plt.plot(estimated_states[:, 1], label='Estimated velocity')
plt.xlabel('Time step')
plt.ylabel('Velocity')
plt.legend()
plt.title('Kalman Filter: Velocity Estimation')
plt.show()

Running this code produces two plots: the first shows the true position, the noisy measurements, and the Kalman filter's position estimate; the second shows the true and estimated velocity. Even though only noisy position measurements are available, the estimates settle close to the true position and velocity.
