Welcome to Hume AI
EVI 2 is now available! Visit platform.hume.ai to chat with Hume’s new voice-language foundation model and craft a custom empathic voice for your application.
Hume AI builds AI models that enable technology to communicate with empathy and learn to make people happy.
So much of human communication—in-person, text, audio, or video—is shaped by emotional expression. These cues allow us to attend to each other’s well-being. Our platform provides the APIs needed to ensure that technology, too, is guided by empathy and the pursuit of human well-being.
Empathic Voice Interface
Hume’s Empathic Voice Interface (EVI) is the world’s first emotionally intelligent voice AI. It is the only API that measures nuanced vocal modulations and responds to them using an empathic large language model (eLLM), which guides language and speech generation. Trained on millions of human interactions, our eLLM unites language modeling and text-to-speech with better EQ, prosody, end-of-turn detection, interruptibility, and alignment.
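For orientation, here is a minimal sketch of opening an EVI chat session over WebSocket and sending a single text turn. The endpoint URL, the `api_key` query parameter, and the message fields are assumptions drawn from memory of the public API reference; treat the reference itself as authoritative.

```python
# Minimal sketch: open an EVI chat over WebSocket and send one text turn.
# The endpoint URL, auth query parameter, and message fields below are
# assumptions; confirm them against the EVI API reference.
import asyncio
import json
import os

import websockets  # pip install websockets

EVI_CHAT_URL = "wss://api.hume.ai/v0/evi/chat"  # assumed chat endpoint


async def main() -> None:
    api_key = os.environ["HUME_API_KEY"]
    async with websockets.connect(f"{EVI_CHAT_URL}?api_key={api_key}") as ws:
        # Send a text turn; EVI also accepts streamed audio input.
        await ws.send(json.dumps({"type": "user_input", "text": "Hello!"}))
        # Print the types of the first few server events (assistant text,
        # audio chunks, and so on).
        for _ in range(5):
            event = json.loads(await ws.recv())
            print(event.get("type"))


asyncio.run(main())
```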
Expression Measurement
Hume’s state-of-the-art expression measurement models for voice, face, and language are built on 10+ years of research and advances in semantic space theory pioneered by Alan Cowen. They can capture hundreds of dimensions of human expression in audio, video, and images.
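As a concrete sketch, a batch expression measurement job can be started with a single REST call. The endpoint path, `X-Hume-Api-Key` header, and body fields shown here are assumptions; the API reference has the authoritative request format.

```python
# Minimal sketch: start an expression-measurement batch job over REST.
# The endpoint path, auth header, and body fields are assumptions; check
# the API reference for the authoritative request format.
import os

import requests  # pip install requests

API_KEY = os.environ["HUME_API_KEY"]

resp = requests.post(
    "https://api.hume.ai/v0/batch/jobs",           # assumed batch endpoint
    headers={"X-Hume-Api-Key": API_KEY},           # assumed auth header
    json={
        "models": {"face": {}, "prosody": {}},     # models to run
        "urls": ["https://example.com/clip.mp4"],  # hypothetical media URL
    },
)
resp.raise_for_status()
print("Started job:", resp.json().get("job_id"))
```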
API Reference
The API reference provides detailed descriptions of our REST and WebSocket endpoints. Explore request and response formats, usage examples, and everything you need to integrate Hume APIs; a short request sketch follows below.
API that measures nuanced vocal modulations and responds to them using an empathic large language model
Analyze facial, vocal, and linguistic expressions across 48+ dimensions to unlock deeper emotional insights
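To illustrate the REST side, the sketch below retrieves predictions for a previously created batch job using an API-key header. The path and header name are assumptions carried over from the example above; the API reference documents the exact endpoints and response schemas.

```python
# Minimal sketch: retrieve predictions for a completed batch job.
# The path and X-Hume-Api-Key header are assumptions; see the API reference.
import os

import requests

API_KEY = os.environ["HUME_API_KEY"]
JOB_ID = "your-job-id"  # placeholder for a job created earlier

resp = requests.get(
    f"https://api.hume.ai/v0/batch/jobs/{JOB_ID}/predictions",
    headers={"X-Hume-Api-Key": API_KEY},
)
resp.raise_for_status()
print(resp.json())
```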
SDKs
Jumpstart your development with SDKs built for Hume APIs. They handle authentication, requests, and workflows to make integration straightforward. With support for React, TypeScript, and Python, our SDKs provide the tools you need to build efficiently across different environments; a brief usage sketch follows the list below.
Integrate Hume’s Empathic Voice Interface into React apps with tools for audio recording, playback, and API interaction
Work with Hume’s APIs using type-safe utilities and API wrappers for TypeScript and JavaScript
Access Hume’s APIs in Python with async/sync clients, error handling, and streaming tools
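As a rough sketch of what SDK-based integration can look like in Python, the snippet below starts a batch job through an async client. The class name `AsyncHumeClient` and the `expression_measurement.batch.start_inference_job` method path are assumptions about the SDK's surface, not a verified API; consult the Python SDK README for the actual client interface.

```python
# Illustrative sketch only: the client class and method names below are
# assumptions about the Python SDK's shape, not its verified API.
import asyncio
import os

from hume import AsyncHumeClient  # pip install hume (import path assumed)


async def main() -> None:
    client = AsyncHumeClient(api_key=os.environ["HUME_API_KEY"])
    # Hypothetical call: start an expression-measurement job from a URL.
    job_id = await client.expression_measurement.batch.start_inference_job(
        urls=["https://example.com/clip.mp4"],
    )
    print("Started job:", job_id)


asyncio.run(main())
```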
Example Code
Explore step-by-step guides and sample projects for integrating Hume APIs. Our GitHub repositories include ready-to-use code and open-source SDKs to support your development process in various environments.
Browse sample code and projects designed to help you integrate Hume APIs
Explore all of Hume’s open-source SDKs, examples, and public-facing code
Get Support
Need help? Our team is here to support you with any questions or challenges.