I had been overthinking the background model, and on top of that my initial background model was seriously flawed. After a series of experiments, I realized that my previous model picked up only the pixels whose colors differed significantly, flagging them as foreground. If you look at the previous posts, you will see that the system had been picking up the purple and yellow balls. I needed it to pick up humans in the scene, who, from a distance, appear in shades of black and grey.
My model had a fundamental problem: I had only three Gaussians for the entire background model, one for each of the color channels (R, G and B), which explains why it could only detect strongly differing colors.
However, this model had to be changed. A proper background model should have three Gaussians per pixel of the background that is to be learnt by the machine, one per color channel.
Here is an illustration of how this works: the system computes a Gaussian for each channel of each pixel of the background model. A rough code sketch of this learning step follows.
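This is a minimal sketch in Python/NumPy, not the original code: it stacks a set of background frames and fits one Gaussian (a mean and a standard deviation) to each channel of each pixel. The frame shapes, the function name and the epsilon on the standard deviation are my own assumptions for illustration.

```python
import numpy as np

def learn_background(frames):
    """frames: array of shape (N, H, W, 3) holding N RGB background frames.

    Returns a per-pixel, per-channel mean and standard deviation, each of
    shape (H, W, 3) -- i.e. three Gaussians for every pixel."""
    frames = np.asarray(frames, dtype=np.float64)
    mean = frames.mean(axis=0)           # mu for each pixel/channel
    std = frames.std(axis=0) + 1e-6      # sigma; epsilon avoids zero variance
    return mean, std
```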
After the background is learnt (i.e. once the single Gaussians are calculated), an image on which background subtraction is applied undergoes the tests illustrated below:
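The exact tests were in the illustration, which is not reproduced here. A common test for a single-Gaussian-per-channel model, sketched below under that assumption, marks a pixel as foreground if its value lies more than k standard deviations from the learned mean in any channel. The threshold k = 2.5 is an example value I chose, not necessarily the one used in the project.

```python
import numpy as np

def subtract_background(image, mean, std, k=2.5):
    """Mark pixels of `image` (H, W, 3) as foreground when they deviate
    from the learned per-pixel mean by more than k standard deviations
    in any color channel. `mean` and `std` come from learn_background()."""
    diff = np.abs(image.astype(np.float64) - mean)
    return np.any(diff > k * std, axis=2)   # boolean foreground mask (H, W)
```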