Expression Measurement API

Real-time measurement streaming

WebSocket-based streaming facilitates continuous data flow between your application and Hume’s models, providing immediate feedback and insights.

Key features

  • Real-time data processing: Leveraging WebSockets, this API allows for the streaming of data to Hume’s models, enabling instant analysis and response. This feature is particularly beneficial for applications requiring immediate processing, such as live interaction systems or real-time monitoring tools.
  • Persistent, two-way communication: Unlike traditional request-response models, the WebSocket-based streaming maintains an open connection for two-way communication between the client and server. This facilitates an ongoing exchange of data, allowing for a more interactive and responsive user experience.
  • High throughput and low latency: The API is optimized for high performance, supporting high-volume data streaming with minimal delay. This ensures that applications can handle large streams of data efficiently, without sacrificing speed or responsiveness.

Applications and use cases

WebSockets are ideal for a wide range of applications that benefit from real-time data analysis and interaction. Examples include:

  • Live customer service tools: enhance customer support with real-time sentiment analysis and automated, emotionally intelligent responses
  • Interactive educational platforms: provide immediate feedback and adaptive learning experiences based on real-time student input
  • Health and wellness apps: support live mental health and wellness monitoring, offering instant therapeutic feedback or alerts based on the user’s vocal or textual expressions
  • Entertainment and gaming: create more immersive and interactive experiences by responding to user inputs and emotions in real time

Getting started with WebSocket streaming

Integrating WebSocket-based streaming into your application involves establishing a WebSocket connection with Hume AI’s servers and streaming data directly to the models for processing.

Streaming is built for analysis of audio, video, and text streams. By connecting to WebSocket endpoints you can get near real-time feedback on the expressive and emotional content of your data.

Install the Hume Python SDK

Make sure to enable the optional stream feature when installing the Hume Python SDK.

$ pip install "hume[stream]"

Emotional language from text

This example uses our Emotional Language model to perform sentiment analysis on a children’s nursery rhyme.

If you haven’t already, grab your API key.

Hume Python SDK
import asyncio

from hume import HumeStreamClient
from hume.models.config import LanguageConfig

samples = [
    "Mary had a little lamb,",
    "Its fleece was white as snow.",
    "Everywhere the child went,",
    "The little lamb was sure to go.",
]

async def main():
    client = HumeStreamClient("<YOUR API KEY>")
    config = LanguageConfig()
    async with client.connect([config]) as socket:
        for sample in samples:
            result = await socket.send_text(sample)
            emotions = result["language"]["predictions"][0]["emotions"]
            print(emotions)

asyncio.run(main())

Your result should look something like this:

Sample Result
[
  {'name': 'Admiration', 'score': 0.06379243731498718},
  {'name': 'Adoration', 'score': 0.07222934812307358},
  {'name': 'Aesthetic Appreciation', 'score': 0.02808445133268833},
  {'name': 'Amusement', 'score': 0.027589013800024986},
  ...
  {'name': 'Surprise (positive)', 'score': 0.030542362481355667},
  {'name': 'Sympathy', 'score': 0.03246130049228668},
  {'name': 'Tiredness', 'score': 0.03606246039271355},
  {'name': 'Triumph', 'score': 0.01235896535217762}
]
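Each prediction is a flat list of emotion name/score pairs, so picking out the strongest signals is just a sort. A minimal sketch over a hand-made list shaped like the result above (the scores here are illustrative, not real model output):

```python
# Illustrative emotion list shaped like the streaming result above
# (scores are made up for the example).
emotions = [
    {"name": "Admiration", "score": 0.0638},
    {"name": "Adoration", "score": 0.0722},
    {"name": "Amusement", "score": 0.0276},
    {"name": "Triumph", "score": 0.0124},
]

# Sort descending by score and keep the top two.
top_two = sorted(emotions, key=lambda e: e["score"], reverse=True)[:2]
top_names = [e["name"] for e in top_two]
```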

Facial expressions from an image

This example uses our Facial Expression model to get expression measurements from an image.

Hume Python SDK
import asyncio

from hume import HumeStreamClient
from hume.models.config import FaceConfig

async def main():
    client = HumeStreamClient("<YOUR API KEY>")
    config = FaceConfig(identify_faces=True)
    async with client.connect([config]) as socket:
        result = await socket.send_file("<YOUR IMAGE FILEPATH>")
        print(result)

asyncio.run(main())

Speech prosody from an audio or video file

This example uses our Speech Prosody model to get expression measurements from an audio or video file.

Hume Python SDK
import asyncio

from hume import HumeStreamClient
from hume.models.config import ProsodyConfig

async def main():
    client = HumeStreamClient("<YOUR API KEY>")
    config = ProsodyConfig()
    async with client.connect([config]) as socket:
        result = await socket.send_file("<YOUR VIDEO OR AUDIO FILEPATH>")
        print(result)

asyncio.run(main())

Streaming with your own WebSockets client

To call the API from your own WebSockets client you’ll need the API endpoint, a JSON message, and an API key header/param. More information can be found in the Expression Measurement API reference.

To get started, you can use a WebSocket client of your choice to connect to the models endpoint:

WebSocket URI
wss://

Make sure you configure the socket connection headers with your personal API key:

X-Hume-Api-Key: <YOUR API KEY>

The default WebSockets implementation in your browser may not support custom headers. If that's the case, you can set the apiKey query parameter instead.
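For example, the key can be appended to the connection URL as a query parameter; a sketch in Python (the endpoint placeholder here is hypothetical and stands in for the models endpoint above):

```python
from urllib.parse import urlencode

# Hypothetical placeholder for the streaming models endpoint.
base_url = "wss://<HUME_STREAM_ENDPOINT>"

# Append the API key as a query parameter when the client
# can't set custom headers.
url = f"{base_url}?{urlencode({'apiKey': '<YOUR API KEY>'})}"
```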

And finally, send the following JSON message on the socket:

JSON Message
{
  "models": {
    "language": {}
  },
  "raw_text": true,
  "data": "Mary had a little lamb"
}
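In Python, for instance, this message can be assembled and serialized with the standard library before being sent over the socket (a minimal sketch using only `json`):

```python
import json

# Build the streaming request: run the language model on raw text.
message = json.dumps({
    "models": {"language": {}},
    "raw_text": True,
    "data": "Mary had a little lamb",
})
```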

You should receive a JSON response that looks something like this:

JSON Response
{
  "language": {
    "predictions": [
      {
        "text": "Mary",
        "position": { "begin": 0, "end": 4 },
        "emotions": [
          { "name": "Anger", "score": 0.012025930918753147 },
          { "name": "Joy", "score": 0.056471485644578934 },
          { "name": "Sadness", "score": 0.031556881964206696 },
          ...
        ]
      },
      {
        "text": "had",
        "position": { "begin": 5, "end": 8 },
        "emotions": [
          { "name": "Anger", "score": 0.0016927534015849233 },
          { "name": "Joy", "score": 0.02388327568769455 },
          { "name": "Sadness", "score": 0.018137391656637192 },
          ...
        ]
      },
      ...
    ]
  }
}
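Once parsed, the response is plain nested dictionaries. A sketch of walking the word-level predictions, using a trimmed-down copy of the response above (only two words and one emotion each, with rounded scores):

```python
# Trimmed-down copy of the JSON response above.
response = {
    "language": {
        "predictions": [
            {"text": "Mary", "position": {"begin": 0, "end": 4},
             "emotions": [{"name": "Joy", "score": 0.0565}]},
            {"text": "had", "position": {"begin": 5, "end": 8},
             "emotions": [{"name": "Joy", "score": 0.0239}]},
        ]
    }
}

# Collect (word, joy score) pairs from each word-level prediction.
joy_by_word = [
    (pred["text"],
     next(e["score"] for e in pred["emotions"] if e["name"] == "Joy"))
    for pred in response["language"]["predictions"]
]
```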

Sending images or audio

The WebSocket endpoints of the Expression Measurement API require that you encode your media using base64. Here’s a quick example of base64 encoding data in Python:

Base64 encoding
import base64
from pathlib import Path

def encode_data(filepath: Path) -> str:
    with Path(filepath).open("rb") as fp:
        bytes_data = base64.b64encode(fp.read())
    encoded_data = bytes_data.decode("utf-8")
    return encoded_data

filepath = "<path-to-your-media>"
encoded_data = encode_data(filepath)
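The encoded string then goes in the message's data field. A sketch using inline bytes in place of a real media file (the face model config here mirrors the earlier examples):

```python
import base64
import json

# Inline bytes stand in for real image file contents.
bytes_data = base64.b64encode(b"example image bytes")
encoded_data = bytes_data.decode("utf-8")

# Embed the base64 payload in a streaming message for the face model.
message = json.dumps({
    "models": {"face": {}},
    "data": encoded_data,
})
```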