University of Pittsburgh researchers have developed PyAFAR, a Python-based Automated Facial Action Recognition library for use in infants and adults.
Description
Facial action recognition, which analyzes face and head movements, affect, and expression, can provide valuable insight into a person’s emotional state and has a variety of uses in healthcare, education, and entertainment. PyAFAR is a novel, validated, automated, and easy-to-use tool for detecting facial actions and expressions in both adults and infants, allowing input videos to be rapidly analyzed.
Applications
• Face tracking
• Face registration
• Facial action unit detection
• Facial visualization
Advantages
Current tools for detecting facial action units, expression, face and head dynamics, and affect are available but have many shortcomings. Commercial tools are often costly, and their validity is unknown because of proprietary information constraints. Open-source tools are often not user friendly, with interfaces designed for programmers rather than non-programmers. Both commercial and open-source tools typically lack evidence of domain transfer and options for retraining in new domains.
This novel tool has been designed with the user in mind. Building in Python on previously developed automated facial action recognition software, the earlier model has been retrained, improving its performance.
Invention Readiness
PyAFAR has been developed in Python using other open-source libraries and is compatible with Windows, Linux, and MacOS operating systems. Convolutional neural networks were trained on the BP4D+ 3D video database of spontaneous facial expressions in adults aged 18–66 years of various ethnicities, and on the Miami and CLOCK databases for infants. PyAFAR has three major components: face detection and individualized tracking, face normalization and identification, and action unit (AU) detection. Face detection allows tracking of multiple persons, including those who may leave and re-enter the video. Face normalization removes background to improve AU detection. AU occurrence and intensity detection are enabled for 12 AUs in adults and for 9 AUs in infants that are associated with positive and negative affect. Output is designed to be exported directly into user code or to CSV and JSON files. If required, the model can be further optimized and trained on new datasets supplied by the user.
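For illustration, the sketch below shows how exported results might be consumed downstream in user code. It is a minimal example under stated assumptions, not PyAFAR’s documented schema: the file names, the "frame" index column, and the per-AU occurrence columns (e.g. "AU6", "AU12") are hypothetical choices made only to demonstrate loading per-frame AU output from CSV or JSON.

    # Minimal sketch of consuming PyAFAR-style per-frame AU output.
    # Assumptions (hypothetical, not PyAFAR's documented layout): the CSV
    # has one row per video frame, a "frame" column, and one 0/1 occurrence
    # column per action unit (e.g. "AU6", "AU12").
    import json

    import pandas as pd

    # Load the exported CSV of per-frame AU occurrence predictions.
    aus = pd.read_csv("video01_aus.csv")

    # Fraction of frames on which each action unit was detected.
    occurrence_rate = aus.filter(like="AU").mean()
    print(occurrence_rate.sort_values(ascending=False))

    # Frames where AU6 (cheek raiser) and AU12 (lip corner puller) co-occur,
    # a common proxy for smiling / positive affect.
    smile_frames = aus[(aus["AU6"] == 1) & (aus["AU12"] == 1)]["frame"]
    print(f"{len(smile_frames)} candidate smile frames")

    # The same results exported as JSON can be read directly as well.
    with open("video01_aus.json") as fh:
        records = json.load(fh)

Because the output is plain CSV/JSON, results can be analyzed with standard tools such as pandas without depending on PyAFAR itself at analysis time.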
IP Status
Software