Facial Expression

Facial expression is the best-studied modality of expressive behavior, but research has focused overwhelmingly on six discrete categories of facial movement, which capture less than 30% of what typical facial expressions convey, and on the scientifically useful but outdated Facial Action Coding System. Recent studies reveal that people distinguish more than 28 distinct dimensions of facial expression (Cowen & Keltner, 2020; Cowen et al., 2021; Brooks et al., 2022).

Hume’s Facial Emotional Expression Model generates 48 outputs encompassing the 28+ dimensions of meaning that people distinguish in facial expression. The 48 outputs also cover alternative conceptualizations of expression, for ease of interpretation and alignment across our different models. As with all of our models, the labels for each dimension are proxies for how people tend to label the underlying patterns of behavior; they should not be treated as direct inferences of emotional experience.
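
To make the output format concrete, here is a minimal sketch of ranking the strongest dimensions for a single face. The scores shown are invented for illustration, and the exact response shape is documented in the API reference; each of the 48 outputs is a continuous score paired with its label.

```python
# A hypothetical per-face result: one continuous score for each of the
# 48 output dimensions. Scores here are made up for illustration; see
# the API reference for the authoritative response shape.
emotions = [
    {"name": "Joy", "score": 0.71},
    {"name": "Amusement", "score": 0.64},
    {"name": "Interest", "score": 0.22},
    # ... remaining dimensions omitted for brevity
]

# Rank the dimensions by score and keep the strongest few.
top = sorted(emotions, key=lambda e: e["score"], reverse=True)[:3]
for e in top:
    print(f"{e['name']}: {e['score']:.2f}")
```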

Hume’s FACS 2.0 Model is a new-generation automated facial action coding system (FACS). With 55 outputs encompassing 26 traditional action units (AUs) and 29 other descriptive features (e.g., smile, scowl), FACS 2.0 is even more comprehensive than manual FACS annotation and less biased by factors such as age.
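
As a sketch of how these outputs might be requested, the configuration below enables FACS 2.0 action units and descriptive features alongside the default emotional expression outputs. The `facs` and `descriptions` field names reflect our reading of the batch API and should be verified against the API reference.

```python
# Hypothetical job configuration enabling FACS 2.0 outputs in addition to
# the default emotional expression outputs. Field names are assumptions;
# verify against the API reference.
job_config = {
    "models": {
        "face": {
            "facs": {},          # request the 26 traditional AU outputs
            "descriptions": {},  # request the 29 descriptive features (e.g., smile, scowl)
        }
    },
    "urls": ["https://example.com/interview.mp4"],  # placeholder media URL
}
```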

Our facial expression models are packaged with face detection and work on both images and videos. Further details can be found in the API reference.
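
For example, a minimal sketch of submitting a video to the batch REST API in Python follows. The endpoint, header, and request fields reflect the batch API at the time of writing, and the media URL and key are placeholders; consult the API reference for the authoritative schema.

```python
import requests

API_KEY = "YOUR_HUME_API_KEY"          # placeholder; substitute your own key
BASE_URL = "https://api.hume.ai/v0/batch"
HEADERS = {"X-Hume-Api-Key": API_KEY}

# Submit an image or video URL for facial expression measurement.
# The "face" model runs face detection plus expression scoring per frame.
response = requests.post(
    f"{BASE_URL}/jobs",
    headers=HEADERS,
    json={
        "urls": ["https://example.com/interview.mp4"],  # placeholder media URL
        "models": {"face": {}},
    },
)
response.raise_for_status()
job_id = response.json()["job_id"]

# Jobs run asynchronously: poll the job status endpoint until the job
# completes, then fetch the per-frame predictions.
predictions = requests.get(f"{BASE_URL}/jobs/{job_id}/predictions", headers=HEADERS)
print(predictions.json())
```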

In addition to our image-based facial expression models, we offer an anonymized Facemesh Model for applications in which personally identifiable data must stay on-device (e.g., for compliance with local laws). Instead of face images, the Facemesh Model processes facial landmarks detected with Google's MediaPipe library, and it achieves about 80% accuracy relative to our image-based model.
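
A minimal sketch of the on-device landmark extraction step, using MediaPipe's Face Mesh solution, is shown below; the resulting landmark coordinates, rather than any face image, would then be sent to the Facemesh Model (see the API reference for the exact payload format).

```python
import cv2
import mediapipe as mp

# Detect 3D facial landmarks on-device with MediaPipe Face Mesh.
# Only these landmark coordinates, never the face image, leave the device.
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)

image = cv2.imread("face.jpg")  # placeholder image path
results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = [
        (lm.x, lm.y, lm.z)  # normalized coordinates for each mesh vertex
        for lm in results.multi_face_landmarks[0].landmark
    ]
    print(f"Extracted {len(landmarks)} landmarks")  # 468 for the standard mesh
```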