This was a computer vision project aimed at extracting and tracking vehicles in video taken from a traffic control camera.
- It learns a background model from the video; the learning rate decreases as the video advances.
- A foreground mask is generated for each frame, and noise-reduction filters are then applied to it.
- Connected regions are extracted from the FG mask.
- The blob detection system compares the current blobs with the previous blobs to detect newly entered blobs. It also detects blobs that have not been seen for too many frames and deletes them from its database.
- The blob ID assignment assigns a unique ID to each blob; the IDs are unique over the duration of the video. To use this with a real-life traffic camera, however, the ID database should be cleared at fixed intervals.
- The blobs are then passed to the tracking sub-system, which runs each blob through a Kalman filter, computes an error value from its trajectory, and estimates a final position and velocity for each blob.
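The background-learning and mask-cleanup steps above can be sketched in Python with NumPy. The decay schedule, difference threshold, and 3x3 erosion kernel are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def update_background(bg, frame, frame_idx, base_rate=0.05):
    """Running-average background model; the learning rate decays
    as the video advances (hypothetical decay schedule)."""
    alpha = base_rate / (1.0 + 0.01 * frame_idx)
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30.0):
    """Binary FG mask: pixels that differ enough from the background."""
    return (np.abs(frame - bg) > thresh).astype(np.uint8)

def erode(mask):
    """3x3 erosion via shifted minima -- removes isolated noise pixels."""
    m = np.pad(mask, 1, mode="edge")
    h, w = mask.shape
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= m[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out
```

In practice a library background subtractor (such as OpenCV's MOG2) would replace the hand-rolled running average, but the structure of the loop is the same.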
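Extracting connected regions from the cleaned mask can be done with a simple BFS labelling pass; a minimal sketch (a production pipeline would more likely use a library routine such as OpenCV's connected-components function):

```python
from collections import deque
import numpy as np

def connected_components(mask):
    """4-connected region labelling via BFS.
    Returns one (centroid_y, centroid_x, area) tuple per blob."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                q = deque([(y, x)])
                seen[y, x] = True
                pix = []
                while q:
                    cy, cx = q.popleft()
                    pix.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pix)
                blobs.append((sum(ys) / len(pix), sum(xs) / len(pix), len(pix)))
    return blobs
```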
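The blob detection and ID-assignment steps can be sketched as nearest-centroid matching against the previous frame's blobs. The `BlobTracker` class, the distance threshold, and the `max_missed` limit are hypothetical; the original project's matching rule and thresholds are not documented here:

```python
import itertools

class BlobTracker:
    """Match current blobs to previous blobs by nearest centroid.
    Unmatched current blobs get fresh IDs (newly entered blobs);
    blobs unseen for `max_missed` frames are dropped from the database."""
    def __init__(self, max_dist=20.0, max_missed=10):
        self.next_id = itertools.count()
        self.tracks = {}               # id -> (y, x, missed_frames)
        self.max_dist = max_dist
        self.max_missed = max_missed

    def update(self, centroids):
        new_tracks = {}
        free = dict(self.tracks)       # tracks still available to match
        for (y, x) in centroids:
            best, best_d2 = None, self.max_dist ** 2
            for bid, (ty, tx, _) in free.items():
                d2 = (ty - y) ** 2 + (tx - x) ** 2
                if d2 <= best_d2:
                    best, best_d2 = bid, d2
            if best is None:
                best = next(self.next_id)   # newly entered blob
            else:
                del free[best]
            new_tracks[best] = (y, x, 0)
        # keep unmatched tracks until they go unseen for too long
        for bid, (ty, tx, missed) in free.items():
            if missed + 1 < self.max_missed:
                new_tracks[bid] = (ty, tx, missed + 1)
        self.tracks = new_tracks
        return {bid: (y, x) for bid, (y, x, m) in new_tracks.items() if m == 0}
```

Greedy nearest-centroid matching is the simplest choice; for crowded scenes a global assignment (e.g. the Hungarian algorithm) would be more robust.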
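The per-blob Kalman filtering step can be sketched as a constant-velocity filter over the blob's centroid; the noise covariances below are illustrative placeholders, and the innovation (`innov`) plays the role of the trajectory-based error value:

```python
import numpy as np

class KalmanTrack:
    """Constant-velocity Kalman filter for one blob's centroid.
    State: [y, x, vy, vx]; the measurement is the centroid (y, x)."""
    def __init__(self, y, x, dt=1.0):
        self.x = np.array([y, x, 0.0, 0.0])
        self.P = np.eye(4) * 100.0          # large initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01           # process noise (placeholder)
        self.R = np.eye(2) * 1.0            # measurement noise (placeholder)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def correct(self, y, x):
        z = np.array([y, x])
        innov = z - self.H @ self.x         # innovation: trajectory error
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x                       # estimated position and velocity
```

Fed a blob moving at a constant speed, the velocity components of the state converge to the true per-frame displacement.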
Below are some sample frames of its result: