Thursday, August 28, 2014

Real-Time Face Pose Estimation

I just posted the next version of dlib, v18.10, and it includes a number of new minor features.  The main addition in this release is an implementation of an excellent paper from this year's Computer Vision and Pattern Recognition Conference:
One Millisecond Face Alignment with an Ensemble of Regression Trees by Vahid Kazemi and Josephine Sullivan
As the name suggests, it allows you to perform face pose estimation very quickly. In particular, this means that if you give it an image of someone's face it will add this kind of annotation:

In fact, this is the output of dlib's new face landmarking example program on one of the images from the HELEN dataset.  To get an even better idea of how well this pose estimator works, take a look at this video where it has been applied to each frame:


It doesn't stop there, though.  You can use this technique to build your own custom pose estimation models.  To see how, take a look at the example program for training these pose estimation models.
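If you are curious what calling the new shape predictor looks like in code, here is a minimal sketch in the style of dlib's face_landmark_detection_ex.cpp.  The model and image file names below are placeholders, not files shipped with dlib:

```cpp
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/image_io.h>
#include <iostream>

int main()
{
    using namespace dlib;

    // The detector finds face bounding boxes; the shape_predictor then
    // finds the landmark positions inside each box.
    frontal_face_detector detector = get_frontal_face_detector();
    shape_predictor sp;
    deserialize("shape_predictor.dat") >> sp;  // placeholder: your trained model

    array2d<rgb_pixel> img;
    load_image(img, "face.jpg");               // placeholder: any image with a face

    std::vector<rectangle> faces = detector(img);
    for (const rectangle& face : faces)
    {
        full_object_detection shape = sp(img, face);
        std::cout << "found " << shape.num_parts() << " landmarks\n";
    }
}
```

Each `full_object_detection` holds the pixel location of every landmark, which you can read back with `shape.part(i)`.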

11 comments:

Hamilton 漢密頓 said...

well done

Rodrigo Benenson said...

Have you evaluated this implementation quality- and/or speed-wise? How does it compare to the numbers reported in the original research paper?

Davis King said...

Yes. The results are comparable to those reported in the paper both in terms of speed and accuracy.

Rodrigo Benenson said...

Sweet!

Stephen Moore said...

Does the "real time pose estimation algorithm" run a face detector on every frame, or does it use the previous frame's output to estimate the current frame?

Davis King said...

You can run it either way. The input to the pose estimator is a bounding box for a face and it outputs the pose.

The included example program shows how to get that bounding box from dlib's face detector but you could just as easily use the face pose from the previous frame to define the bounding box.
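As a sketch of that second approach, you could derive the current frame's bounding box from the previous frame's landmarks.  This helper is not part of dlib, just an illustration:

```cpp
#include <dlib/image_processing.h>
#include <algorithm>

// Compute the tightest dlib::rectangle enclosing all landmarks from the
// previous frame's full_object_detection, to use as the face bounding
// box for the current frame.
dlib::rectangle box_from_landmarks(const dlib::full_object_detection& shape)
{
    long left   = shape.part(0).x(), right  = shape.part(0).x();
    long top    = shape.part(0).y(), bottom = shape.part(0).y();
    for (unsigned long i = 1; i < shape.num_parts(); ++i)
    {
        left   = std::min(left,   shape.part(i).x());
        right  = std::max(right,  shape.part(i).x());
        top    = std::min(top,    shape.part(i).y());
        bottom = std::max(bottom, shape.part(i).y());
    }
    return dlib::rectangle(left, top, right, bottom);
}
```

You would then pass that rectangle to the shape predictor in place of a detector output, falling back to the detector whenever the face is lost.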

Amanda Sgroi said...

In the paper "One Millisecond Face Alignment ...", they output 194 landmark points on the face; however, the model provided in dlib only outputs 68 points. Is there a way to easily produce the 194 points using the code provided in dlib?

Davis King said...

I only included the 68 point style model used by the iBUG 300-W dataset in this dlib release. However, if you want to train a 194 point model you can do so pretty easily by following the example here: http://dlib.net/train_shape_predictor_ex.cpp.html

You can get the training data from the HELEN dataset webpage http://www.ifp.illinois.edu/~vuongle2/helen/.
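For anyone attempting this, here is a hedged sketch following the structure of train_shape_predictor_ex.cpp.  The XML file name is hypothetical: you would build it yourself from the HELEN annotations (dlib's imglab tool can help), and the trainer parameters shown are just the ones from the example program:

```cpp
#include <dlib/image_processing.h>
#include <dlib/data_io.h>
#include <fstream>

int main()
{
    using namespace dlib;

    // Images plus one vector of landmark annotations per image, loaded
    // from a dlib-format XML file (hypothetical name).
    dlib::array<array2d<unsigned char>> images;
    std::vector<std::vector<full_object_detection>> shapes;
    load_image_dataset(images, shapes, "helen_training.xml");

    shape_predictor_trainer trainer;
    trainer.set_oversampling_amount(300);  // values from train_shape_predictor_ex.cpp
    trainer.set_nu(0.05);
    trainer.set_tree_depth(2);

    // Train a 194-point model; the point count comes from the annotations.
    shape_predictor sp = trainer.train(images, shapes);

    std::ofstream fout("helen_predictor.dat", std::ios::binary);
    serialize(sp, fout);
}
```

The resulting file can then be loaded exactly like the 68-point model that ships with this release.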

drjo said...

I compiled the example from v18.10 and get an error: "DLIB_JPEG_SUPPORT not #defined: Unable to load the image in file ..\faces\2007_007763.jpg".

Can you please help me out?

Davis King said...

You need to tell your compiler to add a #define for DLIB_JPEG_SUPPORT and then link it with libjpeg.

If you are unsure how to configure your compiler to do this then I would suggest using CMake (following the directions at http://dlib.net/compile.html). CMake will set all this stuff up for you.
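If you would rather compile by hand on Linux, a command along these lines should work; the dlib path is a placeholder for wherever you unpacked the release:

```shell
# Build the face landmarking example with JPEG support enabled.
# -DDLIB_JPEG_SUPPORT turns on dlib's JPEG loading; -ljpeg links libjpeg.
# -lX11 is needed because the example opens a display window.
g++ -O2 -I/path/to/dlib-18.10 \
    -DDLIB_JPEG_SUPPORT \
    examples/face_landmark_detection_ex.cpp \
    /path/to/dlib-18.10/dlib/all/source.cpp \
    -ljpeg -lpthread -lX11 \
    -o face_landmark_detection_ex
```

With CMake this all happens automatically, which is why it is the recommended route.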