The Hume AI Platform can be applied to almost any application involving video, images, or language from people. We offer a spectrum of models of nonverbal behavior and psychometrics that complement large language models, offering a new window into the half of human communication that isn’t explicitly represented in words. Our APIs are provided for two purposes: (1) scientific research and (2) the development of applications that respond to human expressive behaviors in keeping with ethical guidelines and scientific best practices.
Currently, our models measure:
- Facial Expression, including subtle facial movements often seen as expressing love or admiration, awe, disappointment, or cringes of empathic pain, which span at least 28 distinct dimensions of meaning.
- Speech Prosody, or the non-linguistic tone, rhythm, and timbre of speech, which spans at least 18 distinct dimensions of meaning.
- Vocal Burst, including laughs, sighs, huhs, hmms, cries and shrieks (to name a few), which span at least 24 distinct dimensions of meaning.
- Emotional Language, or the emotional tone of transcribed text, which spans 53 distinct dimensions of meaning.
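The four models above can be selected independently depending on the media being analyzed. As a minimal sketch, the snippet below builds a request configuration choosing which models to run; the payload shape, model names, and the `build_models_config` helper are illustrative assumptions, not the official API schema.

```python
# Illustrative sketch only: the "models" payload shape and the model
# names below are assumptions for demonstration, not Hume's actual schema.

def build_models_config(face=True, prosody=True, burst=True, language=True):
    """Return a dict naming which expressive-behavior models to run."""
    config = {}
    if face:
        config["face"] = {}      # facial expression: at least 28 dimensions
    if prosody:
        config["prosody"] = {}   # speech prosody: at least 18 dimensions
    if burst:
        config["burst"] = {}     # vocal bursts: at least 24 dimensions
    if language:
        config["language"] = {}  # emotional language: 53 dimensions
    return {"models": config}

# For an audio-only application, one might request only the vocal models:
payload = build_models_config(face=False)
```

A configuration like this would typically accompany the media file in a request to the measurement API; consult the official API reference for the actual endpoint and schema.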
Used responsibly, expression understanding is integral to a wide range of technologies capable of advancing the greater good. It can help us build healthier social networks. It can train digital assistants to respond with nuance to our present state of mind—to how we say something rather than simply what we say. It can inform technologies that let animators bring relatable characters to life, apps that work to improve mental health, and communication platforms that enhance our empathy for others. It can even be used to create entirely new experiences, from new kinds of art to personalized VR worlds, optimized for specific emotions.