Introduction

Welcome to Hume AI

Hume AI builds AI models that enable technology to communicate with empathy and learn to make people happy.

So much of human communication, whether in person or through text, audio, or video, is shaped by emotional expression. These cues allow us to attend to each other’s well-being. Our platform provides the APIs needed to ensure that technology, too, is guided by empathy and the pursuit of human well-being.

Empathic Voice Interface

Hume’s Empathic Voice Interface (EVI) is the world’s first emotionally intelligent voice AI. It is the only API that measures nuanced vocal modulations and responds to them using an empathic large language model (eLLM), which guides language and speech generation. Trained on millions of human interactions, our eLLM unites language modeling and text-to-speech with better EQ, prosody, end-of-turn detection, interruptibility, and alignment.
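To make the interaction model concrete, here is a minimal TypeScript sketch of an EVI session over WebSocket. The endpoint path, the api_key query parameter, and the message shapes are assumptions for illustration, not the definitive contract; see the API reference for the exact details.

```typescript
// Minimal sketch of an EVI chat session over WebSocket (Node, "ws" package).
// The endpoint, auth scheme, and message shapes are assumed for illustration.
import WebSocket from "ws";

const socket = new WebSocket(
  `wss://api.hume.ai/v0/evi/chat?api_key=${process.env.HUME_API_KEY}` // assumed endpoint + auth
);

socket.on("open", () => {
  // Send one text turn; EVI replies with language and generated speech.
  socket.send(JSON.stringify({ type: "user_input", text: "Hi, how are you?" })); // assumed shape
});

socket.on("message", (data) => {
  const message = JSON.parse(data.toString());
  // Responses arrive as typed events, e.g. assistant messages and audio chunks.
  console.log(message.type);
});
```

In a real application the client would stream microphone audio up and play returned audio chunks back; the text-only turn above just shows the session lifecycle.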

Expression Measurement

Hume’s state-of-the-art expression measurement models for voice, face, and language are built on more than ten years of research and on advances in semantic space theory pioneered by Alan Cowen. These models can capture hundreds of dimensions of human expression in audio, video, and images.
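As a concrete illustration, here is a hedged TypeScript sketch of submitting media to a batch measurement job over REST. The endpoint, the X-Hume-Api-Key header, and the request body schema are assumptions for illustration; the API reference documents the exact fields.

```typescript
// Minimal sketch: start a batch expression measurement job (Node 18+, ESM).
// Endpoint, auth header, and body schema are assumed for illustration.
const response = await fetch("https://api.hume.ai/v0/batch/jobs", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Hume-Api-Key": process.env.HUME_API_KEY ?? "", // assumed auth header
  },
  body: JSON.stringify({
    urls: ["https://example.com/interview.mp4"], // hypothetical media URL
    models: { face: {}, prosody: {}, language: {} }, // assumed model-selection shape
  }),
});

const { job_id } = await response.json(); // assumed response field
console.log("Submitted job:", job_id); // poll job status, then fetch predictions
```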

API Reference

Alongside our documentation, we provide a detailed API reference to help you integrate and use our products. It includes descriptions of all our REST and WebSocket endpoints, as well as request and response formats and usage examples.

Example Code

Explore our step-by-step guides for integrating Hume APIs. Our GitHub repositories include straightforward starter projects and code snippets for specific features to help you get started quickly. You’ll also find open-source SDKs for popular languages and frameworks to support development across a range of environments.

Get Support

If you have questions or run into challenges, we’re here to help!