This was a computer vision project aimed at extracting vehicles from video taken by a traffic control camera.

It works as follows:

  1. It learns the background from the video. The learning rate decreases as the video advances.
  2. The foreground mask is generated for each frame. Then some noise reduction filters are applied to it.
  3. Connected regions are extracted from the FG mask.
  4. The blob detection system compares the current blobs with the previous blobs and detects newly entered ones. It also detects blobs that have not been seen for too many frames and deletes them from its database.
  5. The blob ID assignment gives each blob a unique ID. Each ID is unique for the duration of the video; to use this on a real-life traffic camera, the ID database would need to be cleared at fixed intervals.
  6. The blobs are then passed to the tracking sub-system, which runs each blob through a Kalman filter, computes an error value from its trajectory, and estimates a filtered position and velocity for each blob.
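Steps 1 and 2 can be sketched with a simple exponential running-average background model. The post doesn't give the exact learning schedule or threshold, so the `1/(1 + t)` decay, `alpha0`, and `thresh` values below are illustrative assumptions, and the noise-reduction filters are only noted in a comment:

```python
import numpy as np

def update_background(bg, frame, frame_idx, alpha0=0.05):
    # Learning rate decays as the video advances (assumed schedule;
    # the original project's exact decay is not given in the post).
    alpha = alpha0 / (1.0 + 0.01 * frame_idx)
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30.0):
    # Pixels that differ from the learned background form the FG mask.
    # In the real pipeline, noise-reduction filters (e.g. morphological
    # opening) would be applied to this mask before blob extraction.
    return (np.abs(frame - bg) > thresh).astype(np.uint8)

# Toy demo: an empty background with one bright "vehicle" patch.
bg = np.zeros((8, 8))
frame = bg.copy()
frame[2:4, 2:5] = 200.0            # a 2x3 moving object
mask = foreground_mask(bg, frame)  # 1 where the object is, 0 elsewhere
bg = update_background(bg, frame, frame_idx=0)
```

In practice a library background subtractor (such as OpenCV's MOG2) would replace this hand-rolled model, but the decaying-learning-rate idea is the same.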
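The blob database of steps 4 and 5 can be sketched as a dictionary keyed by unique ID: new centroids that match no existing blob get a fresh ID, and blobs unseen for too many frames are dropped. The class name, matching distance, and `max_missed` threshold here are my own illustrative choices, not values from the original project:

```python
import math

class BlobTracker:
    """Blob database: new blobs get fresh unique IDs, and blobs
    unseen for more than `max_missed` frames are deleted."""

    def __init__(self, match_dist=50.0, max_missed=10):
        self.blobs = {}            # id -> {"pos": (x, y), "missed": n}
        self.next_id = 0
        self.match_dist = match_dist
        self.max_missed = max_missed

    def update(self, centroids):
        matched = set()
        # Match each known blob to the nearest unclaimed centroid.
        for blob in self.blobs.values():
            best, best_d = None, self.match_dist
            for c in centroids:
                if c in matched:
                    continue
                d = math.dist(blob["pos"], c)
                if d < best_d:
                    best, best_d = c, d
            if best is not None:
                blob["pos"], blob["missed"] = best, 0
                matched.add(best)
            else:
                blob["missed"] += 1
        # Unmatched centroids are newly entered blobs.
        for c in centroids:
            if c not in matched:
                self.blobs[self.next_id] = {"pos": c, "missed": 0}
                self.next_id += 1
        # Delete blobs that have been unseen for too many frames.
        self.blobs = {b: v for b, v in self.blobs.items()
                      if v["missed"] <= self.max_missed}

tracker = BlobTracker(max_missed=1)
tracker.update([(0.0, 0.0), (100.0, 100.0)])  # two blobs enter: IDs 0, 1
tracker.update([(5.0, 5.0)])                  # blob 0 moves; blob 1 unseen
```

The centroids themselves would come from step 3's connected-region extraction (e.g. connected-component labeling of the cleaned foreground mask).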
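Step 6 can be illustrated with one predict/update cycle of a constant-velocity Kalman filter per blob. The state layout and the noise parameters `q` and `r` are assumptions for the sketch; the "error value" from the post corresponds to the innovation `y` below:

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update cycle for state x = [px, py, vx, vy],
    given a measured blob centroid z = [px, py]."""
    F = np.array([[1, 0, dt, 0],     # constant-velocity motion model
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],      # we only measure position
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)                # process noise (assumed)
    R = r * np.eye(2)                # measurement noise (assumed)

    # Predict the blob's next state from its trajectory.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: the innovation y is the error between the measured
    # centroid and the prediction.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a blob moving one pixel per frame along x.
x, P = np.zeros(4), np.eye(4)
for t in range(1, 20):
    x, P = kalman_step(x, P, np.array([float(t), 0.0]))
```

After a few frames the velocity estimate settles near the true one pixel per frame, which is the filtered position-and-velocity output the post describes.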

Below are some sample frames of its result:
