Over the last few months I've spent a lot of time studying optimal control and reinforcement learning. Aside from reading, one of the best ways to learn about something is to do it yourself, which in this case means a lot of playing around with the well-known algorithms and, for those I really like, adding them to dlib, which is the subject of this post. So far I've added two methods. The first, included in a previous dlib release, is the well-known least squares policy iteration reinforcement learning algorithm. The second, and my favorite so far due to its practicality, is a tool for solving model predictive control problems.
There is a dlib example program that explains the new model predictive control tool in detail. But the basic idea is that it takes as input a simple linear equation defining how some process evolves in time and then tells you what control input you should apply to drive the process into some user-specified state. For example, imagine you have an air vehicle with a rocket on it and you want it to hover at some specific location in the air. You could use a model predictive controller to find out what direction to fire the rocket at each moment to get the desired outcome. In fact, that is exactly what the dlib example program does. It produces the following visualization, where the vehicle is the black dot and you want it to hover at the green location. The rocket thrust is shown as the red line:
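To give a concrete feel for the API, here is a condensed sketch along the lines of that example program. The particular model, a 2D point mass pushed around by a bounded thrust vector while a constant force pulls it down, and all the numbers in it, are illustrative assumptions on my part; see the example program for the full, annotated version.

```cpp
#include <dlib/control.h>

using namespace dlib;

int main()
{
    const int STATES = 4;   // 2D position and 2D velocity
    const int CONTROLS = 2; // 2D thrust vector

    // The process model:  x_{t+1} = A*x_t + B*u_t + C
    matrix<double,STATES,STATES> A;
    A = 1, 0, 1, 0,   // new position is position + velocity
        0, 1, 0, 1,
        0, 0, 1, 0,   // velocity changes only via thrust and gravity
        0, 0, 0, 1;

    matrix<double,STATES,CONTROLS> B;
    B = 0, 0,
        0, 0,
        1, 0,
        0, 1;

    matrix<double,STATES,1> C;
    C = 0, 0, 0, -0.1;  // a constant force, e.g. gravity

    matrix<double,STATES,1> Q;
    Q = 1, 1, 0, 0;     // penalize position error but not velocity

    matrix<double,CONTROLS,1> R, lower, upper;
    R = 0.1, 0.1;       // mild penalty on large thrusts
    lower = -0.5, -0.5; // hard limits on the available thrust
    upper =  0.5,  0.5;

    // A controller that plans over a 30 time step horizon.
    mpc<STATES,CONTROLS,30> controller(A,B,C,Q,R,lower,upper);

    matrix<double,STATES,1> target;
    target = 5, 5, 0, 0;  // hover at (5,5) with zero velocity
    controller.set_target(target);

    matrix<double,STATES,1> state;
    state = 0, 0, 0, 0;
    for (int i = 0; i < 100; ++i)
    {
        // Ask for the best control to apply right now, then step the
        // simulated process forward with it.
        matrix<double,CONTROLS,1> action = controller(state);
        state = A*state + B*action + C;
    }
}
```

Note that each call to the controller optimizes the whole sequence of future controls over the horizon but returns only the first one, then re-plans from scratch at the next time step. That constant re-planning is what makes MPC forgiving of modeling errors.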
Another fun new tool in dlib is the perspective_window. It's a super easy-to-use tool for visualizing 3D point cloud data. For instance, the included example program shows how to make this:
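The code behind that, modeled on the included example program, is about as short as you'd hope. The spiral is just stand-in data; any set of 3D points works the same way:

```cpp
#include <dlib/gui_widgets.h>
#include <dlib/image_transforms.h>
#include <dlib/rand.h>
#include <cmath>

using namespace dlib;

int main()
{
    // Build a point cloud: a spiral with a little Gaussian noise on it.
    std::vector<perspective_window::overlay_dot> points;
    dlib::rand rnd;
    for (double i = 0; i < 20; i += 0.001)
    {
        dlib::vector<double> val(std::sin(i), std::cos(i), i/4);
        dlib::vector<double> noise(rnd.get_random_gaussian(),
                                   rnd.get_random_gaussian(),
                                   rnd.get_random_gaussian());
        val += noise/20;

        // Color each point by how far along the spiral it is.
        rgb_pixel color = colormap_jet(i, 0, 20);
        points.push_back(perspective_window::overlay_dot(val, color));
    }

    // Pop up a window showing the cloud.  You can rotate and zoom it
    // with the mouse.
    perspective_window win;
    win.set_title("perspective_window 3D point cloud");
    win.add_overlay(points);
    win.wait_until_closed();
}
```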
Finally, Patrick Snape contributed Python bindings for dlib's video tracker, so now you can use it from Python, as the sketch below shows.
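Usage looks something like the following, modeled on the video tracking example in dlib's python_examples folder. The frame folder and the initial bounding box are placeholders for your own data:

```python
import glob
import dlib
from skimage import io

# Point this at a folder of video frames; the path is a placeholder.
frames = sorted(glob.glob("video_frames/*.jpg"))

tracker = dlib.correlation_tracker()
win = dlib.image_window()

for k, f in enumerate(frames):
    img = io.imread(f)
    if k == 0:
        # Tell the tracker what to follow by giving a bounding box
        # around the object in the first frame.
        tracker.start_track(img, dlib.rectangle(74, 67, 112, 153))
    else:
        # Then just feed it each new frame.
        tracker.update(img)

    # Show the frame with the tracker's current guess drawn on top.
    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(tracker.get_position())
```

To try out these new tools, download the newest dlib release.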