Sunday, August 27, 2017

Vehicle Detection with Dlib 19.5


Dlib v19.5 is out and there are a lot of new features. There is a dlib to caffe converter, a number of new deep learning layer types, cuDNN v6 and v7 support, and various optimizations that make things run faster in different situations, like ARM NEON support, which makes HOG based detectors run a lot faster on mobile devices.

However, the coolest and most requested feature has been an upgrade to the CNN+MMOD object detector to support detecting things with varying aspect ratios. The previous version of the detector required the training data to consist of objects that all had essentially the same aspect ratio. This is fine for tasks like face detection and dog hipsterization, but obviously not as general as you would like.

So dlib v19.5 includes an updated version of the MMOD loss layer that can be used to learn an object detector from a dataset with any mixture of bounding box shapes and sizes. To demo this new feature, I used the new MMOD code to create a vehicle detector, which you can see running on these videos. This detector is trained to find cars moving with you in traffic, and therefore cars where the rear end of the vehicle is visible.




The detector is just as fast as previous versions of the CNN+MMOD detector. For instance, when I run it on my NVIDIA 1080ti it processes 39 frames per second when the frames are given to it one at a time, and 93 frames per second when they are processed in batches. This assumes a frame size of 928x478.

If you want to run this detector yourself you can check out the new example program that does just that. The detector was trained on a modest dataset of 2217 images, which is also available, as is the training code. Both these new example programs contain a lot of information about training this kind of detector and are worth reading if you want to understand the details involved. However, a short description here should be enough to understand how the detector works.
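To make that concrete, here is a condensed sketch of the detection side, modeled on my reading of dnn_mmod_find_cars_ex.cpp. Treat the example program as the authoritative version: the network definition (including the 3 filters in the final con layer, matching the 3 detection strength maps discussed below) and the pretrained file name are my best transcription, deserialization only works if the net_type matches the serialized weights exactly, and the input image name is just a placeholder.

#include <iostream>
#include <dlib/dnn.h>
#include <dlib/image_io.h>
#include <dlib/image_processing.h>

using namespace dlib;

// A small downsampling front end followed by a few 5x5 conv layers and the MMOD
// loss.  The input_rgb_image_pyramid layer builds the tiled image pyramid, so the
// network sees the whole pyramid in one pass.
template <long nf, typename SUBNET> using con5d = con<nf,5,5,2,2,SUBNET>;
template <long nf, typename SUBNET> using con5  = con<nf,5,5,1,1,SUBNET>;
template <typename SUBNET> using downsampler = relu<affine<con5d<32, relu<affine<con5d<32, relu<affine<con5d<16,SUBNET>>>>>>>>>;
template <typename SUBNET> using rcon5 = relu<affine<con5<55,SUBNET>>>;
// The final con layer has one 9x9 filter per detector window (i.e. per aspect
// ratio group), which is what produces the detection strength maps.
using net_type = loss_mmod<con<3,9,9,1,1,rcon5<rcon5<rcon5<downsampler<input_rgb_image_pyramid<pyramid_down<6>>>>>>>>;

int main()
{
    net_type net;
    shape_predictor sp;  // the pretrained file also bundles a shape_predictor that
                         // snaps each box a little more tightly onto the car
    deserialize("mmod_rear_end_vehicle_detector.dat") >> net >> sp;

    matrix<rgb_pixel> img;
    load_image(img, "traffic_image.jpg");  // placeholder input image

    // Each detection is an mmod_rect: a rectangle plus a detection confidence.
    for (auto&& d : net(img))
    {
        // Refine the box with the shape_predictor, which was trained to output
        // the 4 corners of the vehicle box.
        full_object_detection fd = sp(img, d.rect);
        rectangle refined;
        for (unsigned long j = 0; j < fd.num_parts(); ++j)
            refined += fd.part(j);
        std::cout << refined << "  confidence: " << d.detection_confidence << std::endl;
    }
}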


Take this image as an example. I ran the new vehicle detector on it and plotted the resulting detections as red boxes. So what are the processing steps that go from the raw image to the 6 boxes?  To roughly summarize, they are:
  1. Create an image pyramid and pack the pyramid into one big image. Let's call this the "tiled pyramid".
  2. Run the tiled pyramid image through a CNN. The CNN outputs a new image where bright pixels in the output image indicate the presence of cars.
  3. Find pixels in the CNN's output image with a value > 0. Those locations are your preliminary car detections.
  4. Perform non-maximum suppression on the preliminary detections to produce the final output.
Steps 3 and 4 are pretty straightforward. It's the first two steps that are complicated. So to understand them, let's visualize the outputs of these first two steps. All step 1 does is call dlib::create_tiled_pyramid on the input image to produce this new image:


What's special about this image is that we don't need to worry about scale anymore. That is, suppose we have a detection algorithm that can find cars, but it only knows how to find cars of a certain size. No problem. When you run it on this tiled pyramid image you are going to find each car somewhere in it at the scale your detector expects. Moreover, the tiled pyramid is only about 3.7 times larger than the original image, so processing it instead of the raw image gives us full scale invariance for only a 3.7x increase in computational cost. That's a very reasonable trade. And tiling the pyramid inside a rectangular image makes it very easy to process using normal CNN tooling on a GPU and still get full GPU speeds.
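If you want to poke at this step yourself, building and saving a tiled pyramid only takes a few lines. Here is a minimal sketch; the file names are placeholders and pyramid_down<6> is the downsampling rate I believe the detector's input layer uses:

#include <vector>
#include <dlib/image_io.h>
#include <dlib/image_transforms.h>

using namespace dlib;

int main()
{
    matrix<rgb_pixel> img, tiled_img;
    load_image(img, "traffic_image.jpg");  // placeholder input image

    // rects[i] records where pyramid level i was placed inside tiled_img, so a
    // detection found in the tiled image can be mapped back to the original.
    std::vector<rectangle> rects;
    create_tiled_pyramid<pyramid_down<6>>(img, tiled_img, rects);

    save_png(tiled_img, "tiled_pyramid.png");
}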

Now for step 2. The CNN takes the tiled pyramid as input, does a bunch of convolutions, and outputs a new set of images. In the case of our vehicle detector, it outputs 3 new images, each of which is a detection strength map that gets "hot" in locations likely to contain a vehicle. The reason there are 3 images for the vehicle detector is because there are, roughly, 3 different aspect ratios (tall and skinny, e.g. semi trucks; short and wide, e.g. sedans; and squarish, e.g. SUVs). For purposes of display, I have combined the 3 images into one by taking the pointwise max of the 3 original images. You can see this combined image below. The dark blue areas are places the CNN is saying "definitely not a vehicle" and the bright red locations are the positions it thinks contain a vehicle.
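That per-pixel max is easy to reproduce yourself. Below is a minimal, self-contained sketch; in the real detector you would hand it net.subnet().get_output() after running the network on the tiled pyramid, while the dummy tensor in main() just keeps the sketch runnable on its own:

#include <cmath>
#include <iostream>
#include <dlib/dnn.h>

using namespace dlib;

// Collapse the per-aspect-ratio detection strength maps (the channels of the
// network's output tensor) into one map: out(r,c) = max over channels k of the
// value at (0,k,r,c).
matrix<float> combine_detection_maps(const tensor& t)
{
    matrix<float> out(t.nr(), t.nc());
    const float* data = t.host();  // sample 0; layout is (sample, channel, row, column)
    for (long k = 0; k < t.k(); ++k)
        for (long r = 0; r < t.nr(); ++r)
            for (long c = 0; c < t.nc(); ++c)
            {
                const float v = data[(k*t.nr() + r)*t.nc() + c];
                if (k == 0 || v > out(r,c))
                    out(r,c) = v;
            }
    return out;
}

int main()
{
    // A stand-in for the network's output: 3 aspect-ratio maps of size 4x6.
    resizable_tensor dummy;
    dummy.set_size(1, 3, 4, 6);
    float* d = dummy.host();
    for (size_t i = 0; i < dummy.size(); ++i)
        d[i] = (float)std::sin(0.3*i);  // arbitrary values, just for the demo

    std::cout << combine_detection_maps(dummy) << std::endl;
}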


If we overlay this CNN output on top of the tiled pyramid you can see it's doing the right thing. The cars get bright red dots on them, right in the centers of the cars. Moreover, you can tell that the CNN is only detecting cars at a certain scale. The smaller cars are detected at the top of the pyramid and only as we progress down the pyramid does it begin to detect the larger cars.


After the CNN output is obtained, all the detection code needs to do is threshold the CNN output, find all the hot spots, apply non-max suppression, and output the boxes corresponding to the identified hot spots. And that's it, that's all the CNN+MMOD detector is doing.
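Here is a rough sketch of steps 3 and 4 under some simplifying assumptions of my own: it works on a single detection strength map, uses one fixed box size instead of taking the scale from the pyramid level and the aspect ratio from the hottest map, and uses a made-up 0.5 overlap threshold for the greedy non-max suppression:

#include <algorithm>
#include <iostream>
#include <vector>
#include <dlib/geometry.h>
#include <dlib/matrix.h>

using namespace dlib;

struct candidate { rectangle rect; float score; };

double iou(const rectangle& a, const rectangle& b)
{
    const double inter = a.intersect(b).area();
    return inter / (a.area() + b.area() - inter);
}

std::vector<candidate> detect(const matrix<float>& strength, long box_w, long box_h)
{
    // Step 3: every pixel with strength > 0 is a preliminary detection.
    std::vector<candidate> cands;
    for (long r = 0; r < strength.nr(); ++r)
        for (long c = 0; c < strength.nc(); ++c)
            if (strength(r,c) > 0)
                cands.push_back({centered_rect(point(c,r), box_w, box_h), strength(r,c)});

    // Step 4: greedy non-max suppression, keeping the strongest box in each cluster.
    std::sort(cands.begin(), cands.end(),
              [](const candidate& a, const candidate& b){ return a.score > b.score; });
    std::vector<candidate> keep;
    for (const auto& cand : cands)
    {
        bool suppressed = false;
        for (const auto& kept : keep)
            if (iou(cand.rect, kept.rect) > 0.5)
                suppressed = true;
        if (!suppressed)
            keep.push_back(cand);
    }
    return keep;
}

int main()
{
    matrix<float> strength(20, 20);
    strength = -1;             // mostly "definitely not a car"
    strength(5,5)   = 2.0f;    // two adjacent hot pixels: should collapse to one box
    strength(5,6)   = 1.5f;
    strength(14,12) = 1.0f;    // a second, separate hot spot

    for (const auto& d : detect(strength, 6, 4))
        std::cout << d.rect << "  score: " << d.score << std::endl;
}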

On the other hand, describing how the CNN is trained is more complicated.  The code in dlib uses the usual stochastic gradient descent methods. You can see many of the details if you read the dlib DNN example programs.  How deep learning works in general is a big topic, but the most interesting thing here is the MMOD loss layer.  For the gory details on that I refer you to the MMOD paper, which explains the loss function.  In the paper the loss is discussed in the context of networks that are linear in their parameters, rather than non-linear in their parameters as our CNN is here. However, for understanding the loss the difference between linear and non-linear is a minor detail. In fact, the loss equations are the same for both cases. The only difference is what kind of optimization algorithms are available for each case.  In the linear parameter case you can write a fancy numeric solver capable of solving the problem in a few minutes, but with a non-linear parameterization you have to resort to brute force SGD and GPUs running for many hours.
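For a feel of what that brute force SGD setup looks like in dlib, here is a rough sketch of an MMOD training program. The dataset path, crop size, mini-batch size, and learning rate schedule are illustrative guesses rather than the settings used for the published detector; dnn_mmod_train_find_cars_ex.cpp is the authoritative version:

#include <iostream>
#include <vector>
#include <dlib/dnn.h>
#include <dlib/data_io.h>
#include <dlib/image_transforms.h>

using namespace dlib;

// Same topology as the detection sketch above, but with batch normalization
// (bn_con) instead of affine layers, which is what you use while training.
template <long nf, typename SUBNET> using con5d = con<nf,5,5,2,2,SUBNET>;
template <long nf, typename SUBNET> using con5  = con<nf,5,5,1,1,SUBNET>;
template <typename SUBNET> using downsampler = relu<bn_con<con5d<32, relu<bn_con<con5d<32, relu<bn_con<con5d<16,SUBNET>>>>>>>>>;
template <typename SUBNET> using rcon5 = relu<bn_con<con5<55,SUBNET>>>;
// The number of filters in the final con layer must equal the number of detector
// windows mmod_options decides on below (print options.detector_windows.size()
// and adjust this if your dataset clusters into a different number of groups).
using net_type = loss_mmod<con<3,9,9,1,1,rcon5<rcon5<rcon5<downsampler<input_rgb_image_pyramid<pyramid_down<6>>>>>>>>;

int main()
{
    std::vector<matrix<rgb_pixel>> images_train;
    std::vector<std::vector<mmod_rect>> boxes_train;
    load_image_dataset(images_train, boxes_train, "rear_end_vehicles/train.xml");  // placeholder path

    // mmod_options clusters the training boxes into detector windows, one per
    // aspect ratio group.  The two numbers are the target and minimum target
    // object sizes in pixels (illustrative values).
    mmod_options options(boxes_train, 70, 30);
    net_type net(options);

    dnn_trainer<net_type> trainer(net, sgd(0.0001, 0.9));
    trainer.set_learning_rate(0.1);
    trainer.be_verbose();
    trainer.set_iterations_without_progress_threshold(50000);

    // Train on random crops so every mini-batch has uniformly sized inputs.
    random_cropper cropper;
    cropper.set_chip_dims(350, 350);
    std::vector<matrix<rgb_pixel>> mini_batch_samples;
    std::vector<std::vector<mmod_rect>> mini_batch_labels;
    while (trainer.get_learning_rate() >= 1e-4)
    {
        cropper(50, images_train, boxes_train, mini_batch_samples, mini_batch_labels);
        trainer.train_one_step(mini_batch_samples, mini_batch_labels);
    }

    trainer.get_net();  // wait for the training threads to finish and sync the net
    net.clean();
    serialize("vehicle_detector_sketch.dat") << net;
}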

But at a very high level, training amounts to running the entire detection process over and over, counting the number of detection mistakes (false alarms, missed detections, and duplicate detections), and back-propagating that error gradient through the CNN until the CNN stops messing up. Also, since the MMOD loss layer counts mistakes after non-max suppression is applied, it knows that it needs to get the CNN to avoid producing high outputs in parts of the image that won't be suppressed by non-max suppression. This is why you see the dark blue areas of "definitely not a car" surrounding each of the car detections. The CNN has learned that it needs to be very careful on the border between "it's a car" and "it's not a car" to avoid accidentally detecting the same car multiple times.

This is perhaps easiest to see if we merge the pyramid layers back into the original image. If we make an image where the pixel value is the max over all scales in the pyramid we get this image:


Here you can clearly see the 6 car hotspots and the dark blue areas of "not a car" immediately surrounding them. Finally, overlaying this on the original image gives this wonderful image:




20 comments:

Yili Zhao said...

Hi Davis,
the update to the CNN+MMOD detector to support detecting objects with varying aspect ratios IS REALLY COOL! My dataset has objects with different aspect ratios, and I will definitely try the new detector.
Thanks for this great feature!

Davis King said...

Thanks, glad you like dlib :)

Filipe Trocado Ferreira said...

How do you generate bounding boxes from the heatmap?

Davis King said...

The boxes are centered on the bright spots. The scale (i.e. size) of a box is determined by which level of the pyramid contained the bright spot. The aspect ratio is determined by which of the heatmaps contained the spot in the first place. Recall that the CNN outputs multiple heatmaps, one for each possible aspect ratio.

Filipe Trocado Ferreira said...

Got it, yeah, thanks. Great work one more time. Thinking of a python binding for this?

Davis King said...

Thanks. Yeah I might make a python binding. We will see.

Piso said...

Great job Davis! Now if only you could make this detect objects of multiple classes in a single pass, that would be the definitive version of MMOD+CNN :)

Davis King said...

Yep, that's the next thing I'm doing :)

Manohar Oruganti said...

Hey Davis,
Great work on the vehicle detection. Would appreciate it if you could provide sample code for vehicle detection in python.

Павел Павел said...

Greetings Mr. King,

Thanks a lot for your work and making it available to others.
I have some questions about your CNN implementations (I'm new to CNN, so probably some silly ones):

How is it possible for your CNN to take an arbitrarily sized input image for processing, when, AFAIK, for example AlexNet requires a fixed input resolution?
I also don't quite understand how the output layer returns the coordinates of a detection?

Could you please point me to code/docs/articles to start digging into these topics?
Regards,
Pavel.

Davis King said...

Nothing about CNNs requires a fixed size input so long as there aren't any "fully connected" layers in them. Anything you read about CNNs should make this clear.

As for how to get the output coordinates. I'm not sure I can explain it any more simply than what I've already said. You can see the output image from the CNN in the blog post. That image plainly contains bright spots. Those bright spots are where the cars are located.

Павел said...

Thanks a lot for your response. My thoughts on varying image sizes are more or less summarized in the first answer here: https://stats.stackexchange.com/questions/188165/lenet-limitation-on-input-size/188166

That answer states: "...However, often it is easy to adjust the first layer to make the network (in principle) work with different sized input..." with no further explanation.

I'm a bit stuck at this point, hence my question.

Regards,
Pavel.

Ali MOUIZINA said...

Great job as usual. I was wondering how I could enable ARM NEON support for HOG based detectors?

Davis King said...

Thanks. NEON support is just one of the cmake options you can toggle on or off when you compile.

Davis King said...

Oops, that's not right. It's not a cmake toggle. For that you put -mfpu=neon as a compiler flag, just like you normally would, and dlib will automatically use NEON instructions.

Terence Liu said...

I understand the aspect ratio point for creating a pyramid, but how does this apply to the CNN face detector? I assume both use an image pyramid, but you only illustrated this fully in the (later) car detection example?

And because of the different aspect ratios, you would have to use a two-step process, one step being the detector and the other a shape predictor (https://github.com/davisking/dlib/blob/master/examples/dnn_mmod_find_cars_ex.cpp). But I don't see the shape predictor in the face detection example (https://github.com/davisking/dlib/blob/master/examples/dnn_mmod_face_detection_ex.cpp). The net definitions are exactly the same, so how does that work?

Davis King said...

None of these things need a shape_predictor. That's only in the example because it makes the bounding boxes look a little nicer and is a nice easy technique to know about. It has nothing to do with different aspect ratios or anything like that. The pyramid also doesn't have anything to do with the box aspect ratios.

All these networks do is output these detection strength maps. This is true of the face detection CNN, the dog head one, and the cars one. There is a detection strength map for every aspect ratio and whichever is hottest decides the aspect ratio. The face detector only outputs one detection strength map because all its boxes are square (and it was made before the code supported multiple aspect ratios anyway). The cars detector outputs multiple maps because there are multiple car aspect ratios.

Terence Liu said...

Got it. Thanks!