
Intro

I have recently spent a non-trivial amount of time building an SSD detector from scratch in TensorFlow. I had initially intended for it to help identify traffic lights in my team's SDCND Capstone Project. However, it turned out that it's not particularly effective with tiny objects, so I ended up using the TensorFlow Object Detection API for that purpose instead. In the end, I managed to bring my implementation of SSD to a pretty decent state, and this post gathers my thoughts on the matter. It is not intended to be a tutorial. Instead, it's a discussion of all the pieces of information that were unclear to me or that I needed to research independently of the original paper.

Object Detection

Base Network and Extensions

SSD-VGG300

SSD is based on a modified VGG-16 network pre-trained on the ImageNet data. I happened to have one from one of my previous projects, and I used it here as well. The following modifications have been made to the base network:

  • pool5 was changed from 2x2 (stride: 2) to 3x3 (stride: 1)
  • fc6 and fc7 were converted to convolutional layers and subsampled
  • à trous convolution was used in fc6
  • fc8 and all of the dropout layers were removed

As you can see from the above image, the fc6 and fc7 convolutions are 3x3x1024 and 1x1x1024 respectively, whereas in the original VGG they are 7x7x4096 and 1x1x4096. Having huge filters like these is a computational bottleneck. According to one of the references, we can address this problem by "spatially subsampling (by simple decimation)" the weights and then using the à trous convolution to keep the filter's receptive field unchanged. It was not immediately clear to me what this means, but after reading this page of MATLAB's documentation, I came up with the following:

with tf.variable_scope('mod_conv6'):
    # Decimate the original 7x7x512x4096 fc6 weights down to 3x3x512x1024 by
    # keeping every third spatial sample and every fourth filter.
    orig_w, orig_b = sess.run([self.vgg_fc6_w, self.vgg_fc6_b])
    mod_w = np.zeros((3, 3, 512, 1024))
    mod_b = np.zeros(1024)

    for i in range(1024):
        mod_b[i] = orig_b[4*i]
        for h in range(3):
            for w in range(3):
                mod_w[h, w, :, i] = orig_w[3*h, 3*w, :, 4*i]

    # The dilated (à trous) convolution compensates for the smaller filter by
    # keeping the receptive field roughly unchanged.
    w = array2tensor(mod_w, 'weights')
    b = array2tensor(mod_b, 'biases')
    x = tf.nn.atrous_conv2d(self.mod_pool5, w, rate=6, padding='SAME')
    x = tf.nn.bias_add(x, b)
    self.mod_conv6 = tf.nn.relu(x)

It doubled the speed of training and did not seem to have any adverse effects on accuracy. Note that the dilation rate of the à trous convolution is set to 6 instead of 3. This setting is inconsistent with the size of the original filter, but it is nonetheless used in the reference code.

The output of the conv4_3 layer differs in magnitude compared to other layers used as feature maps of the detector. As pointed out in the ParseNet paper, this fact may lead to reduced performance because "larger" features may overwhelm the "smaller" ones. They propose to use L2 normalization with a scale learnable separately for each channel as a remedy to this problem. This is what I ended up doing in TensorFlow:

def l2_normalization(x, initial_scale, channels, name):
    with tf.variable_scope(name):
        scale = array2tensor(initial_scale*np.ones(channels), 'scale')
        x = scale*tf.nn.l2_normalize(x, dim=-1)
    return x

The initial scale for each channel is set to 20, and it does not change very much over the training time.

Furthermore, a bunch of extra convolutional layers were added on top of the modified fc7. The number of these layers depends on the flavor of the detector: vgg300 or vgg512. The paper does not explain the parameters of these convolutions well enough, especially the padding settings, even though getting this part wrong can significantly impact performance. I looked them up in the reference code for vgg300 and worked my way backward from the number of anchors in the case of vgg512. Here's what I ended up with:

  • conv8_1: 1x1x256 (stride: 1, pad: same)
  • conv8_2: 3x3x512 (stride: 2, pad: same)
  • conv9_1: 1x1x128 (stride: 1, pad: same)
  • conv9_2: 3x3x256 (stride: 2, pad: same)
  • conv10_1: 1x1x128 (stride: 1, pad: same)
  • conv10_2: 3x3x256 (stride: 1, pad: valid) for vgg300, (stride: 2, pad: same) for vgg512
  • conv11_1: 1x1x128 (stride: 1, pad: same)
  • conv11_2: 3x3x256 (stride: 1, pad: valid)

For the vgg512 flavor, there are two extra layers:

  • conv12_1: 1x1x128 (stride: 1, pad: same)
  • padding of the conv12_1 feature map with one extra cell in each spatial dimension
  • conv12_2: 3x3x256 (stride: 1, pad: valid)

It's not possible to use the predefined padding options (VALID or SAME) for extending conv12_1, so I ended up doing it manually:

x, l2 = conv_map(self.ssd_conv11_2, 128, 1, 1, 'conv12_1')
paddings = [[0, 0], [0, 1], [0, 1], [0, 0]]
x = tf.pad(x, paddings, "CONSTANT")
self.ssd_conv12_1 = self.__with_loss(x, l2)
x, l2 = conv_map(self.ssd_conv12_1, 256, 3, 1, 'conv12_2', 'VALID')
self.ssd_conv12_2 = self.__with_loss(x, l2)

Default Boxes (a.k.a. Anchors)

Default Boxes

The model takes the outputs of some of these convolutional layers and associates a scale with each of them. The exact formula is presented in the paper; the reference implementation does not seem to follow it exactly, though. In general, the further away the feature map is from the input, the larger is the scale assigned to it. The scale only loosely correlates with the receptive field of the respective filter.

The model adds a bunch of 3x3xp convolutional filters on top of each of these maps. Each of these filters predicts p parameters of a default box (or an anchor) at the location to which it is applied. Four of these p parameters are the coordinates of the window (relative width and height, as well as x and y offsets from the center of the anchor). The remaining parameters define the probability distribution of the box belonging to one of the classes that the model predicts (the softmaxed logits). We need to add as many of these filters per feature map as we want aspect ratios for the default boxes of a given scale. In general, the more, the better. The paper advises using six aspect ratios per map. However, the implementation uses fewer of them in some cases.
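
Here is a minimal sketch of such a prediction head, written as a single convolution whose output channels pack the p parameters for every anchor at each location; feature_map, num_anchors, and num_classes are assumed to be defined elsewhere:

p = num_classes + 4                                       # per-anchor parameters
head = tf.layers.conv2d(feature_map, num_anchors * p, 3, padding='same')
# Reshape to [batch, anchors_in_this_map, p] for the loss computation.
head = tf.reshape(head, [tf.shape(head)[0], -1, p])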

We now need to create the ground truth labels for the optimizer. We match each ground truth box to an anchor box with the highest Jaccard overlap (if it exceeds 0.5). Additionally, we match it to every anchor with overlap higher than 0.5. The original code uses a mixture of bipartite matching and maximum overlap to resolve conflicts, but I just used the latter criterion for simplicity. For every matched anchor we set the class label accordingly and use the following for the box parameters:

\[ w = 10 \cdot \log\left(\frac{w_{gt}}{w_{a}}\right) \\ h = 10 \cdot \log\left(\frac{h_{gt}}{h_{a}}\right) \\ x_c = 5 \cdot \frac{x_{c,gt} - x_{c,a}}{w_a} \\ y_c = 5 \cdot \frac{y_{c,gt} - y_{c,a}}{h_a} \]

The code uses the scaling constants above (5, 10) and calls them "prior variance," but the paper does not mention this fact.
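
For illustration, here is a minimal NumPy sketch of this encoding for a single matched pair; the center-size box representation and the helper name are just for the example:

import numpy as np

def encode_box(gt, anchor):
    """Encode a ground truth box relative to an anchor.

    Both boxes are (x_center, y_center, width, height) tuples; the output
    order is (w, h, x_c, y_c), matching the formulas above.
    """
    x_gt, y_gt, w_gt, h_gt = gt
    x_a, y_a, w_a, h_a = anchor
    return np.array([10.0 * np.log(w_gt / w_a),
                     10.0 * np.log(h_gt / h_a),
                     5.0 * (x_gt - x_a) / w_a,
                     5.0 * (y_gt - y_a) / h_a])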

Training Objective

The loss function consists of three parts:

  • the confidence loss
  • the localization loss
  • the l2 loss (weight decay in the Caffe parlance)

The confidence loss is what TensorFlow calls softmax_cross_entropy_with_logits, and it's computed for the class probability part of the parameters of each anchor. Since there are many more positive (matched) anchors than negative (unmatched/background) ones, the learning ends up being more stable if not every background score contributes to the final loss. We mine the scores of all the positive anchors and at most three times as many negative anchors, using only the background anchors with the highest confidence loss. It results in somewhat involved code in the declarative style of TensorFlow.
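
Here is a rough sketch of that selection logic, assuming per-anchor loss and positive-mask tensors and a TensorFlow version that provides tf.argsort:

def hard_negative_mask(conf_loss, positives, neg_ratio=3.0):
    """Select all positive anchors plus the highest-loss negatives.

    conf_loss: [batch, num_anchors] per-anchor cross-entropy loss
    positives: [batch, num_anchors] float mask of matched anchors
    """
    negatives = 1.0 - positives
    num_neg = tf.minimum(
        neg_ratio * tf.reduce_sum(positives, axis=-1, keepdims=True),
        tf.reduce_sum(negatives, axis=-1, keepdims=True))

    # Rank the negatives by their loss (positives are zeroed out here) and
    # keep the top num_neg of them for each sample in the batch.
    neg_loss = conf_loss * negatives
    rank = tf.argsort(tf.argsort(neg_loss, direction='DESCENDING'), axis=-1)
    hard_negatives = tf.cast(tf.cast(rank, tf.float32) < num_neg, tf.float32)

    return positives + hard_negatives * negatives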

The localization loss sums up the Smooth L1 losses of differences between the prediction and the ground truth labels. The Smooth L1 loss is defined as follows:

\[ \mathrm{SmoothL1}(x) = \begin{cases} |x| - 0.5 & |x| \geq 1 \\ 0.5 \cdot x^2 & |x| \lt 1 \end{cases} \]

It translates to the following code in TensorFlow:

def smooth_l1_loss(x):
    square_loss   = 0.5*x**2
    absolute_loss = tf.abs(x)
    return tf.where(tf.less(absolute_loss, 1.), square_loss, absolute_loss-0.5)

The paper advises using a batch size of 32. However, this recommendation assumes training in parallel on four GPUs. If you have just one (like I do), 8 is a better number. The original code uses the SGD optimizer with momentum, rate decay at predefined steps, and a doubled rate for biases. I found that the Adam optimizer with an exponential decay rate of 0.97 per epoch and 0.1 as the stability constant (epsilon) works better for this implementation. The TensorFlow documentation warns that the default epsilon may not be a good choice in general and recommends using a higher value in some cases. Indeed, I found that using the default makes the weights very small very fast, and the learning process becomes unstable.
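
A minimal sketch of that optimizer setup; initial_rate and batches_per_epoch stand for values defined elsewhere:

global_step   = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(initial_rate, global_step,
                                            batches_per_epoch, 0.97,
                                            staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate, epsilon=0.1)
train_op  = optimizer.minimize(loss, global_step=global_step)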

Non-Maximum Suppression

Because of the anchor matching strategy and the vast irregularity of the shapes we train on, the network will produce multiple overlapping detections of the same object. One way to get rid of duplicates is to perform non-maximum suppression. The algorithm is straightforward:

  • you pick your favorite box
  • you remove all the boxes that have the Jaccard overlap with your selection above a certain threshold
  • you choose your second favorite box and repeat step 2
  • you continue until there is no new favorite to select

This article provides a more detailed description, although its selection criterion is rather strange (the position of the lower-right corner) and the implementation is pretty inefficient. My code using numpy's bulk operations is here. I should reimplement it using TensorFlow tensors and will likely do that when I have a spare moment.
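
For reference, a compact NumPy sketch of this greedy procedure could look like the following (it assumes corner-format boxes and is not the linked implementation):

import numpy as np

def non_max_suppression(boxes, scores, threshold=0.45):
    """Greedy NMS. boxes: [N, 4] as (xmin, ymin, xmax, ymax)."""
    order = scores.argsort()[::-1]            # best box first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Jaccard overlap of the selected box with the remaining ones.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= threshold]   # drop the overlapping boxes
    return keep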

Data Augmentation and Issues with Parallelism in Python

The SSD training depends heavily on data augmentation. I won't describe it at all here because the paper does a great job at that. The only tricky part that it does not mention is the fact that you do not clip any ground truth box if it happens to span outside the boundaries of a subsampled input image. See transforms.py if you want more details.

Things run much faster when the data is preprocessed in parallel before being fed to TensorFlow. However, the poor support for multithreading/multiprocessing in Python turned out to be a significant obstacle here. As you probably know, running your computation in multiple threads is utterly pointless in Python because the execution ends up being serial due to GIL issues. The GIL problem is typically addressed with multiprocessing. However, it comes with a separate can of worms.

First, if you want to transfer any significant amount of data between the processes efficiently, you need to avoid pickling and use POSIX shared memory instead. It's not hugely complicated, but it's not trivial either. Second, if any of the packages you import uses threading underneath, you're almost guaranteed to encounter fork-safety issues. Add strange errors when forking CUDA-enabled libraries to the mix, and you end up with a minor horror story. It took me about a full day of work to write and debug the shared memory queue and to debug the fork safety issues in the pipeline. In case you're wondering, this code does the trick for the latter:

workers = []
# Hide the GPU and disable OpenCV's thread pool before forking so that the
# worker processes neither initialize CUDA nor inherit a threaded state.
os.environ['CUDA_VISIBLE_DEVICES'] = ""
cv2_num_threads = cv2.getNumThreads()
cv2.setNumThreads(1)
for i in range(num_workers):
    args = (sample_queue, batch_queue)
    w = mp.Process(target=batch_producer, args=args)
    workers.append(w)
    w.start()
# Restore the original settings in the parent process.
del os.environ['CUDA_VISIBLE_DEVICES']
cv2.setNumThreads(cv2_num_threads)

Pascal VOC and the mAP Metric

The Pascal VOC (Visual Object Classes) project provides standardized datasets for object class recognition as well as tools for evaluation and comparison of different detection methods. The datasets contain several thousand annotated Flickr pictures. The metric they use to compare object detection algorithms is called mAP - Mean Average Precision - and is the arithmetic mean of the AP (Average Precision) scores for each object class in the dataset.

The task of object detection is treated as a ranked document retrieval system (as in search) and the AP metric is an 11-point interpolated average precision. More specifically, the system:

  • sorts the detections of a given class in all the images of the dataset by confidence in descending order
  • loops over the detections and classifies them according to the following greedy algorithm:
    • if a detection overlaps with the ground truth object with the IoU score of 50% or more and the object has not been previously detected, it's a true positive
    • if IoU is above 50% but the object has been detected before, or the IoU is below 50%, it's a false positive
    • ground truth objects with no matching detections are false negatives
    • calculate the precision and recall for the current state

Precision and recall data points calculated at each iteration contribute to the precision vs. recall curve which is then interpolated according to the following formula, sampled at 11 equally spaced recall points between 0 and 1, and averaged.

\[ p_{interp}(r) = \max_{r' \geq r} p(r') \]
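
A small NumPy sketch of the 11-point computation, assuming precision and recall arrays collected while sweeping the ranked detections:

import numpy as np

def average_precision_11pt(precision, recall):
    ap = 0.0
    for r in np.linspace(0, 1, 11):
        mask = recall >= r
        p_interp = precision[mask].max() if mask.any() else 0.0
        ap += p_interp / 11.0
    return ap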

The graph below shows what the curves for the bottle class look like when we decide to accept objects above different confidence thresholds. Note how the curves for lower confidence levels extend the ones for the higher levels.

Precision vs. Recall - Bottle class

Here are the AP values for the corresponding confidence thresholds:

Confidence   AP
0.01         0.497
0.10         0.471
0.30         0.353
0.50         0.270

The lower the confidence of the results we're willing to accept, the higher our AP gets, but the number of low-confidence false positives grows as well. It makes perfect sense for a ranked document retrieval system. We care a lot whether we get only the relevant results in the first couple of pages of a Google search, but we don't care all that much if we have a bunch of false positives on the hundredth page. Does it make sense when it comes to object detection? That probably varies widely depending on your application. I would argue that, in the general case, when you just care about quality detections, it's somewhat confusing. Below are examples of detections in the same picture, with boxes above the 0.5 and 0.01 confidence levels, coming from the same SSD model. The settings used to produce the second picture score a higher mAP over the entire dataset than the ones used to generate the first one.

Detections above 0.5 confidence

Detections above 0.01 confidence

You can get more info about it here.

Results

I trained a somewhat modified version of the vgg300 flavor of the detector on the union of VOC2007+VOC2012 trainval and VOC2007 test samples with heavy data augmentation. It scored 74.7% mAP when tested on the samples it trained on, while the reference score is around 77.5%. The result on the VOC2012 test samples was 68.71% with the reference at 75.8%. I did not use the same aspect ratio and scale settings as the ones utilized by the original implementation. Surprisingly, sticking to the reference parameters produced even worse results. Another reason for the discrepancy may be a different choice of the optimizer and the fact that the reference implementation doubles the learning rate for biases. Using different learning rates for different variables is possible in TensorFlow. However, I have not been able to do that without the system repeating the forward pass and most of the backward pass for each learning rate setting. It effectively almost doubled the training time per epoch, and I was not patient enough to wait for the results.

When I exported the model as a static inference graph, it took roughly 100MB, compared to around 1.3GB when in the checkpoint format. I then used it as a detector in the vehicle detection project I did some time ago. It processed 1261 frames of the testing video, including the FFmpeg compression and decompression time, in roughly 25 seconds, reaching over 50 FPS on average. That's blazing speed considering that my fairly inefficient SVM implementation took well over 8 minutes (~2.5 FPS) to process the same video. Note, however, that due to the non-maximum suppression, the speed is a function of the number of positive predictions, and this video has relatively few detected objects. You can see the results below.

Vehicle detection with SSD

Conclusion

The project took quite a bit longer than I had initially anticipated but it was a great learning experience and ultimately a great deal of fun. With the hard negative mining, it was probably the most complicated loss function I have implemented in TensorFlow to date. I learned about adaptive feature map scaling, dug through a lot of Caffe's and TensorFlow's source code, learned about the stability of AdamOptimizer, and read a whole bunch of deep learning research papers. I wasted some time fighting mostly non-existent issues because I had not initially paid sufficient attention to what is measured by the accuracy metric. I have a bunch of ideas on how to improve the model to reach the reference performance and I will likely try some of them out in the near future.

All my code is here.

What is it about?

Semantic segmentation is the process of dividing an image into sets of pixels sharing similar properties and assigning one of the pre-defined labels to each of these sets. Ideally, you would like to get a picture such as the one below. It's a result of blending color-coded class labels with the original image. This sample comes from the CityScapes dataset.

Segmented Image

Segmentation Classes

How is it done?

Figuring out object boundaries in an image is hard. There's a variety of "classical" approaches that take colors and gradients into account and obtain encouraging results; see this paper by Shi and Malik for example. However, in 2015 and 2016, Long, Shelhamer, and Darrell presented a method using Fully Convolutional Networks that significantly improved the accuracy (the mean intersection over union metric) and the inference speed. My goal was to replicate their architecture and use it to segment road scenes.

A fully convolutional network differs from a regular convolutional network in that it has the final fully-connected classifier stripped off. Its goal is to take an image as input and produce an equally-sized output in which each pixel is represented by a softmax distribution describing the probability of that pixel belonging to a given class. I took this picture from one of the papers mentioned above:

Fully Convolutional Network

For the results presented in this post, I used the pre-trained VGG16 network provided by Udacity for the beta test of their Advanced Deep Learning Capstone. I took layers 3, 4, and 7 and combined them in the manner described in the picture below, which, again, is taken from one of the papers by Long et al.

Upscaling and merging

First, I used 1x1 convolutions on top of each extracted layer to act as local classifiers. After doing that, these partial results are still 32, 16, and 8 times smaller than the input image, so I needed to upsample them (see below). Finally, I used a weighted addition to obtain the result. The authors of the original paper report that without this weighting the learning process diverges.
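
Here is a rough TF1-style sketch of that merge; the layer names, the scaling constants, and the signature of the upsample helper (described in the next section) are illustrative:

# 1x1 convolutions act as local classifiers on each extracted VGG layer.
score7 = tf.layers.conv2d(vgg_layer7, num_classes, 1, name='score7')
score4 = tf.layers.conv2d(vgg_layer4, num_classes, 1, name='score4')
score3 = tf.layers.conv2d(vgg_layer3, num_classes, 1, name='score3')

# Upsample step by step and merge with weighted additions.
up7    = upsample(score7, num_classes, 2, 'up7')      # 1/32 -> 1/16
fuse4  = up7 + 0.01 * score4
up4    = upsample(fuse4, num_classes, 2, 'up4')       # 1/16 -> 1/8
fuse3  = up4 + 0.0001 * score3
logits = upsample(fuse3, num_classes, 8, 'logits')    # 1/8 -> full resolution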

Learnable Upsampling

Upsampling is done by applying a process called transposed convolution. I will not describe it here because this post over at cv-tricks.com does a great job of doing that. I will just say that transposed convolutions (just like the regular ones) use learnable weights to produce output. The trick here is the initialization of those weights. You don't use the truncated normal distribution, but you initialize the weights in such a way that the convolution operation performs a bilinear interpolation. It's easy and interesting to test whether the implementation works correctly. When fed an image, it should produce the same image but n times larger.

import sys

import cv2
import numpy as np
import tensorflow as tf

img = cv2.imread(sys.argv[1])
print('Original size:', img.shape)

imgs = np.zeros([1, *img.shape], dtype=np.float32)
imgs[0, :, :, :] = img

img_input = tf.placeholder(tf.float32, [None, *img.shape])
upscale = upsample(img_input, 3, 8, 'upscaled')   # upsample by a factor of 8

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    upscaled = sess.run(upscale, feed_dict={img_input: imgs})

print('Upscaled:', upscaled.shape[1:])
cv2.imwrite(sys.argv[2], upscaled[0, :, :, :])

Where upsample is defined here.
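
For reference, the bilinear initialization itself can be computed like the following sketch, assuming the kernel is then used with tf.nn.conv2d_transpose:

import numpy as np

def bilinear_filter(factor, channels):
    """Build a [size, size, channels, channels] kernel that performs
    bilinear upsampling by `factor` in a transposed convolution."""
    size = 2 * factor - factor % 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    kernel = (1 - abs(og[0] - center) / factor) * \
             (1 - abs(og[1] - center) / factor)
    weights = np.zeros((size, size, channels, channels), dtype=np.float32)
    for c in range(channels):
        weights[:, :, c, c] = kernel   # each channel is upsampled independently
    return weights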

Datasets

I was mainly interested in road scenes, so I played with the KITTI Road and CityScapes datasets. The first one has 289 training images with two labels (road/not road) and 290 testing samples. The second one has 2975 training, 500 validation, and 1525 testing pictures taken while driving around large German cities. It has fine-grained annotations for 29 classes (including "unlabeled" and "dynamic"). The annotations are color-based and look like the picture below.

Picture Labels

Even though I concentrated on those two datasets, both the training and the inference software are generic and can handle any pixel-labeled dataset. All you need to do is create a new source_xxxxxx.py file defining your custom samples. The definition is a class that contains seven attributes:

  • image_size - self-evident, both horizontal and vertical dimensions need to be divisible by 32
  • num_classes - number of classes that the model is supposed to handle
  • label_colors - a dictionary mapping a class number to a color; used for blending of the classification results with input image
  • num_training - number of training samples
  • num_validation - number of validation samples
  • train_generator - a generator producing training batches
  • valid_generator - a generator producing validation batches

See source_kitti.py or source_cityscapes.py for a concrete example. The training script picks the source based on the value of the --data-source parameter.
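
A skeleton of such a module might look like this; the class name, numbers, and colors are purely illustrative:

import numpy as np

class Source:
    """A data source exposing the seven attributes listed above."""
    def __init__(self):
        self.image_size     = (384, 1248)      # both dimensions divisible by 32
        self.num_classes    = 2
        self.label_colors   = {0: (0, 0, 0), 1: (255, 0, 255)}
        self.num_training   = 289
        self.num_validation = 29

    def train_generator(self, batch_size):
        h, w = self.image_size
        while True:
            # A real source would load and augment samples here.
            images = np.zeros((batch_size, h, w, 3), dtype=np.float32)
            labels = np.zeros((batch_size, h, w, self.num_classes), np.float32)
            yield images, labels

    def valid_generator(self, batch_size):
        return self.train_generator(batch_size)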

Normalization

Typically, you would normalize the input dataset such that its mean is at zero and its standard deviation is at one. It significantly improves convergence of the gradient optimization. In the case of the VGG model, the authors just zeroed the mean without scaling the variance (see section 2.1 of the paper). Assuming that the model was trained on the ImageNet dataset, the mean values for each channel are μ_R = 123.68, μ_G = 116.779, and μ_B = 103.939. The pre-trained model provided by Udacity already has a pre-processing layer handling these constants. Judging from the way it does this, it expects plain BGR values scaled between 0 and 255 as input.

Label Validation

Since the network outputs softmaxed logits for each pixel, the training labels need to be one-hot encoded. According to the TensorFlow documentation, each row of labels needs to be a proper probability distribution. Otherwise, the gradient calculation will be incorrect and the whole model will diverge. So, you need to make sure that you're never in a situation where you have all zeros or multiple ones in your label vector. I have made this mistake so many times that I decided to write a checker script for my data source modules. It produces examples of training images blended with their pixel labels to check if the color maps have been defined correctly. It also checks every pixel in every sample to see if the label rows are indeed valid. See here for the source.
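
The core of such a check is tiny; here is a sketch for a single one-hot encoded label image:

import numpy as np

def check_labels(labels):
    """labels: [H, W, num_classes] array of one-hot encoded pixel labels."""
    sums = labels.sum(axis=-1)
    if not np.all(sums == 1):
        raise ValueError('found pixels with zero or multiple class labels')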

Initialization of variables

Initialization of variables is a bit of a pain in TensorFlow. You can use the global initializer if you create and train your model from scratch. However, in the case when you want to do transfer learning - load a pre-trained model and extend it - there seems to be no convenient way to initialize only the variables that you created. I ended up doing acrobatics like this:

# Find the variables that have not been initialized yet and initialize only
# those, leaving the pre-trained weights untouched.
uninit_vars    = []
uninit_tensors = []
for var in tf.global_variables():
    uninit_vars.append(var)
    uninit_tensors.append(tf.is_variable_initialized(var))
uninit_bools = sess.run(uninit_tensors)
uninit = zip(uninit_bools, uninit_vars)
uninit = [var for init, var in uninit if not init]
sess.run(tf.variables_initializer(uninit))

Training

For training purposes, I reshaped both labels and logits in such a way that I ended up with 2D tensors for both. I then used tf.nn.softmax_cross_entropy_with_logits as a measure of loss and minimized it with AdamOptimizer and a learning rate of 0.0001. The model trained on the KITTI dataset for 500 epochs - 14 seconds per epoch on my GeForce GTX 1080 Ti. The CityScapes dataset took 150 epochs to train - 9.5 minutes per epoch on my GeForce vs. 25 minutes per epoch on an AWS P2 instance. The model exhibited some overfitting. However, the visual results seemed tighter the longer it trained. In the picture below, the top row contains the ground truth and the bottom one contains the inference results (TensorBoard rocks! :).
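
In code, the loss setup boils down to something like this sketch; the tensor names are assumed, with logits and labels starting as [batch, H, W, num_classes]:

logits_flat = tf.reshape(logits, [-1, num_classes])
labels_flat = tf.reshape(labels, [-1, num_classes])
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels_flat,
                                            logits=logits_flat))
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)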

CityScapes Validation Examples

CityScapes Validation Loss

CityScapes Training Loss

Results

The inference (including image processing) takes 80 milliseconds per image on average for CityScapes and 27 milliseconds for KITTI. Here are some examples from both datasets. The model seems to be able to distinguish a pedestrian from a bike rider with some degree of accuracy, which is pretty impressive!

CityScapes Example #1

CityScapes Example #2

KITTI Example #1

KITTI Example #2

Go here for the full code.

The challenge

Here's another cool project I have done as a part of Udacity's self-driving car program. There were two problems to solve. The first one was to find the lane lines and compute some of their properties. The second one was to detect and draw bounding boxes around nearby vehicles. Here's the result I got:

Detecting lane lines and vehicles

Detecting lanes

The first thing I do after correcting for camera lens distortion is to apply a combination of Sobel operators and color thresholding to get an image of edges. This operation makes the lines more pronounced and therefore much easier to detect.
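
Roughly, the thresholding step looks like this sketch; the color space choice and the threshold values are illustrative:

import cv2
import numpy as np

def edge_image(img):
    hls  = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # The gradient in the x direction picks up mostly vertical lines.
    sobel_x = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))
    sobel_x = np.uint8(255 * sobel_x / sobel_x.max())
    grad_mask = (sobel_x > 20) & (sobel_x < 100)

    # The saturation channel is fairly robust to lighting changes.
    sat_mask = hls[:, :, 2] > 170

    return np.uint8(grad_mask | sat_mask) * 255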

Edges

I then get a birds-eye view of the scene by applying a perspective transform and produce a histogram of all the white pixels located in the bottom half of the image. The peaks in this histogram indicate the presence of mostly vertical lines, which is what we're looking for. I detect all these lines by using a sliding window search. I start at the bottom of the image and move towards the top adjusting the horizontal position of each successive window to the average of the x coordinate of all the pixels contained in the previous one. Finally, I fit a parabola to all these pixels. Out of all the candidates detected this way, I select a pair that is the closest to being parallel and is roughly in the place where a lane line would be expected.

The orange area in the picture below visualizes the histogram, and the red boxes with blue numbers in them indicate the positions of the peaks found by the find_peaks_cwt function from scipy.
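
In code, the histogram and the peak search look roughly like this; warped stands for the binary bird's-eye-view image, and the peak widths are illustrative:

import numpy as np
from scipy.signal import find_peaks_cwt

histogram = np.sum(warped[warped.shape[0] // 2:, :] > 0, axis=0)
peaks = find_peaks_cwt(histogram, np.arange(30, 150))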

Bird's eye view - histogram search

Once I have found the lanes in one video frame, locating them in the next one is much simpler - their position did not change by very much. I just take all the pixels from a close vicinity of the previous detection and fit a new polynomial to them. The green area in the image below denotes the search range, and the blue lines are the newly fitted polynomials.

Bird's eye view - vicinity search

I then use the equations of the parabolas to calculate the curvature. The program that produced the video above uses cross-frame averaging to make the lines smoother and to vet new detections in successive video frames.
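
For reference, assuming a parabola fitted as x = Ay^2 + By + C (with y being the vertical image coordinate), the radius of curvature at a given y is:

\[ R = \frac{\left(1 + (2Ay + B)^2\right)^{3/2}}{\left|2A\right|} \]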

Vehicle detection

I detect cars by dividing the image into a bunch of overlapping tiles of varying sizes and running each tile through a classifier to check if it contains a car or a fraction of a car. In this particular solution, I used a linear support vector machine (LinearSVC from sklearn). I also wrapped it in a CalibratedClassifierCV to get a measure of confidence. I rejected predictions of cars that were less than 85% certain. The classifier was trained on data harvested from the GTI, KITTI, and Udacity datasets, from which I collected around 25 times more background samples than cars to limit the occurrence of false-positive detections.

As far as image features are concerned, I use only Histograms of Oriented Gradients with parameters that are essentially the same as the ones presented in this paper dealing with the detection of humans. I used OpenCV's HOGDescriptor to extract the HOGs. The reason for this is that it can compute the gradients taking all of the color channels into account. See here. It is a capability that other libraries typically lack, limiting you to a form of grayscale. The training set consists of roughly 2M images of 64 by 64 pixels.
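
A sketch of this setup; the HOG parameters follow the standard 64x64 pedestrian-detection defaults, and samples and labels stand for the prepared training data:

import cv2
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

# 64x64 window, 16x16 blocks, 8x8 block stride and cells, 9 orientation bins.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def features(img_bgr):
    # HOGDescriptor.compute works on the color image directly, using the
    # channel with the strongest gradient at each pixel.
    return hog.compute(img_bgr).ravel()

X = np.array([features(img) for img in samples])
clf = CalibratedClassifierCV(LinearSVC())     # wraps the SVM to get predict_proba
clf.fit(X, labels)
car_probability = clf.predict_proba(X)[:, 1]  # detections below 0.85 get rejected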

Tiles containing cars

Since the samples the classifier trains on contain pictures of fractions of cars, the same car is usually detected multiple times in overlapping tiles. Also, the types of background differ quite a bit, and it's hard to find images of all the possible things that are not cars. Therefore, false positives are quite frequent. To combat these problems, I use heat maps that are averaged across five video frames. Every pixel that has fewer than three detections on average per frame is rejected as a false positive.

Heat map

I then use OpenCV's connectedComponentsWithStats to find connected components and get centroids and bounding boxes for the detections. The centroids are used to track the objects across frames and to smooth the bounding boxes by averaging them over the 12 previous frames. To further reject false positives, an object needs to be classified as a car in at least 6 out of 12 consecutive frames.
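
A sketch of that step, assuming heat_map holds the averaged heat map:

import cv2
import numpy as np

binary = np.uint8(heat_map >= 3)
num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, num):                  # label 0 is the background
    x, y, w, h, area = stats[i]
    cx, cy = centroids[i]
    # track (cx, cy) across frames and draw the (x, y, w, h) bounding box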

Conclusions

The topic is pretty fascinating and the results I got could be significantly improved by:

  • employing smarter sliding window algorithms (e.g., ones with momentum) to better detect dashed lines that are substantially curved
  • finding better ways to do perspective transforms
  • using a better classifier for cars (a deep neural network perhaps)
  • using techniques like YOLO
  • using something smarter than strongly connected components to distinguish overlapping detections of different vehicles - mean shift clustering comes to mind
  • making performance improvements here and there (use C++, parallelize video processing and so on)

I learned a lot of computer vision techniques and had plenty of fun doing this project. I also spent a lot of time reading the code of OpenCV. It has a lot of great tutorials, but its API documentation is lacking.

Video Link

Pretty interesting talk on how to prevent squirrels from stealing bird food using Python and computer vision.

Steps to recognize a squirrel in a picture:

  • Subtract background.
    • Compute average value of each pixel over time to build a background profile.
  • Detect blobs.
  • Discriminate blobs. Distinguish squirrels from other things. The author used support vector machines with features such as:
    • Blob size
    • Color histograms
    • Entropy detection (squirrel tail)

Other interesting stuff mentioned: