Daito Manabe and Zach Lieberman are two biometrics designers in the midst of developing face-tracking technology that can analyze your facial expressions and follow them through space and time. Their system uses a video camera and a projector. What you are about to see are some of their tests using an AAM algorithm and a Kinect™. They state their goal as follows:
Non-rigid face alignment and tracking is a common problem in computer vision. It is the front-end to many algorithms that require registration, for example face and expression recognition. However, those working on algorithms for these higher-level tasks are often unfamiliar with the tools and peculiarities of non-rigid registration (e.g. pure machine learning scientists, psychologists, etc.). Even those working directly on face alignment and tracking often find implementing an algorithm from published work to be a daunting task, not least because baseline code against which performance claims can be assessed does not exist. As such, the goal of FaceTracker™ is to provide source code and pre-trained models that can be used out-of-the-box, for the dual purpose of: 1) promoting the advancement of higher-level inference algorithms that require registration; and 2) providing baseline code to promote quantitative improvements in face registration.
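FaceTracker's actual AAM fitting is far more involved than can be shown here, but the "registration" it refers to rests on a simpler building block: aligning one set of facial landmarks to another with a best-fit similarity transform (Procrustes alignment), which removes the rigid part of the motion before any non-rigid analysis. The sketch below is purely illustrative and not taken from FaceTracker; the function name and data are assumptions for the example.

```python
import math

def procrustes_align(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping landmark list src onto dst; both are lists of (x, y) tuples.

    Returns src transformed by the best-fit similarity transform.
    Illustrative only -- not FaceTracker's implementation.
    """
    n = len(src)
    # Centroids of both landmark sets.
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n
    dy = sum(p[1] for p in dst) / n
    # Accumulate dot product, cross product, and squared norm of the
    # centered source shape; these determine rotation and scale.
    a = b = norm = 0.0
    for (x, y), (u, v) in zip(src, dst):
        xc, yc = x - mx, y - my
        uc, vc = u - dx, v - dy
        a += xc * uc + yc * vc    # alignment (dot) term
        b += xc * vc - yc * uc    # rotation (cross) term
        norm += xc * xc + yc * yc
    scale = math.hypot(a, b) / norm
    theta = math.atan2(b, a)
    c = scale * math.cos(theta)
    s = scale * math.sin(theta)
    # Translation that maps the source centroid onto the target centroid.
    tx = dx - (c * mx - s * my)
    ty = dy - (s * mx + c * my)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in src]

# Toy check: a unit square rotated 90 degrees, scaled by 2, and shifted
# is recovered exactly by the alignment.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
target = [(5, 3), (5, 5), (3, 5), (3, 3)]
aligned = procrustes_align(square, target)
```

In a pipeline like the one FaceTracker serves, this rigid normalization would be applied to every tracked frame so that downstream expression analysis sees only the non-rigid deformation of the face, not head motion.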