Daito Manabe and Zach Lieberman are two designers working at the intersection of biometrics and interactive art, currently developing face-tracking technology that can analyze your facial expressions and follow them through space and time. Their system uses a video camera and a projector, and what you are about to see are some of their tests using an AAM (Active Appearance Model) algorithm and a Kinect™. They describe their goal as follows:

Non-rigid face alignment and tracking is a common problem in computer vision. It is the front-end to many algorithms that require registration, for example face and expression recognition. However, those working on algorithms for these higher level tasks are often unfamiliar with the tools and peculiarities of non-rigid registration (e.g., pure machine learning scientists, psychologists, etc.). Even those directly working on face alignment and tracking often find implementing an algorithm from published work to be a daunting task, not least because baseline code against which performance claims can be assessed does not exist. As such, the goal of FaceTracker™ is to provide source code and pre-trained models that can be used out-of-the-box, for the dual purpose of: 1) Promoting the advancement of higher level inference algorithms that require registration; and 2) Providing baseline code to promote quantitative improvements in face registration.
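To give a feel for what that "out-of-the-box" workflow looks like in practice, here is a minimal sketch of per-frame non-rigid face registration from a live camera. It does not use FaceTracker's own API (which isn't shown in the post); instead it stands in OpenCV's facemark module, a different library with the same goal, and the model filenames are illustrative assumptions.

```cpp
// Sketch of the pre-trained, out-of-the-box registration workflow:
// detect a face (rigid), then fit a non-rigid landmark model each frame.
// Uses OpenCV's facemark module (opencv_contrib), NOT FaceTracker's API;
// the cascade and model file paths below are assumptions.
#include <opencv2/opencv.hpp>
#include <opencv2/face.hpp>

int main() {
    cv::VideoCapture cam(0);  // live video, as in the demos
    cv::CascadeClassifier detector("haarcascade_frontalface_alt2.xml");
    cv::Ptr<cv::face::Facemark> facemark = cv::face::FacemarkLBF::create();
    facemark->loadModel("lbfmodel.yaml");  // pre-trained landmark model

    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        detector.detectMultiScale(gray, faces);  // coarse rigid detection

        std::vector<std::vector<cv::Point2f>> landmarks;
        if (!faces.empty() && facemark->fit(frame, faces, landmarks)) {
            // Non-rigid registration: one landmark set per detected face.
            for (const auto& pts : landmarks)
                for (const auto& p : pts)
                    cv::circle(frame, p, 2, cv::Scalar(0, 255, 0), -1);
        }
        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```

The registered landmark set is exactly the kind of front-end output the quote describes: downstream expression-recognition or projection-mapping code consumes the per-frame point coordinates rather than raw pixels.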

To learn more, be sure to visit them at FaceTracker and PKMital. I've embedded two videos below, but to see more of their work be sure to visit Daito.ws.


Source: NOTCOT


Writer, editor, and founder of FEELguide. I have written over 5,000 articles covering many topics including: travel, design, movies, music, politics, psychology, neuroscience, business, religion and spirituality, philosophy, pop culture, the universe, and so much more. I also work as an illustrator and set designer in the movie industry, and you can see all of my drawings at http://www.unifiedfeel.com.
