Python quickstart

Submit batch jobs and stream predictions in real time using Hume's Python SDK.

This guide walks you through using Hume’s Expression Measurement API with the Python SDK. You will submit a batch job to analyze a media file and then connect to the streaming API for real-time predictions.

Setup

Install the SDK

$ uv add hume

Set your API key

Get your API key from the Hume AI platform and set it as an environment variable.

$ export HUME_API_KEY=your_api_key_here

Create the client

import os
from hume import HumeClient

client = HumeClient(api_key=os.getenv("HUME_API_KEY"))

Batch API

The Batch API lets you submit files for processing and retrieve predictions when the job completes. This is best for analyzing recordings, datasets, and other pre-recorded content.

Submit a job

Start a job by specifying the models you want to run and the URLs of the files to process.

from hume.expression_measurement.batch.types import Models, Prosody

# The prosody model analyzes speech, so the URL should point to audio or
# video containing spoken language. Replace this placeholder with your own file.
job_id = client.expression_measurement.batch.start_inference_job(
    urls=["https://example.com/path/to/speech.mp3"],
    models=Models(
        prosody=Prosody(granularity="utterance"),
    ),
)

print(f"Job ID: {job_id}")

You can also upload local files instead of URLs. See the API reference for the local file upload endpoint.
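As a rough sketch, assuming the SDK's local-file endpoint accepts opened files alongside an InferenceBaseRequest payload (the file path here is a placeholder; check the API reference for the exact request shape):

from hume.expression_measurement.batch.types import InferenceBaseRequest, Models, Prosody

# "speech.mp3" is a placeholder path; the file is uploaded as multipart form data.
with open("speech.mp3", "rb") as f:
    job_id = client.expression_measurement.batch.start_inference_job_from_local_file(
        file=[f],
        json=InferenceBaseRequest(models=Models(prosody=Prosody())),
    )

print(f"Job ID: {job_id}")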

Wait for the job to complete

Poll the job status until it reaches COMPLETED or FAILED.

import time

while True:
    job_details = client.expression_measurement.batch.get_job_details(id=job_id)
    status = job_details.state.status

    if status == "COMPLETED":
        print("Job completed.")
        break
    elif status == "FAILED":
        print("Job failed.")
        break

    print(f"Status: {status}")
    time.sleep(3)

For production use, consider passing a callback_url when submitting the job. Hume will send a POST request to your URL when the job completes, eliminating the need to poll. The webhook payload includes the job_id, status, and predictions.
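As a sketch, the submission above with a callback added (the endpoint URL is a placeholder you would host yourself):

from hume.expression_measurement.batch.types import Models, Prosody

# Hume will POST the results to this URL when the job finishes.
job_id = client.expression_measurement.batch.start_inference_job(
    urls=["https://example.com/path/to/speech.mp3"],
    models=Models(prosody=Prosody(granularity="utterance")),
    callback_url="https://your-server.example.com/hume-callback",  # placeholder endpoint
)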

Retrieve predictions

Once the job completes, retrieve and print the predictions.

predictions = client.expression_measurement.batch.get_job_predictions(id=job_id)

for result in predictions:
    # The source shape depends on how the job was submitted: URL sources
    # carry a url, local-file sources carry a filename.
    source = result.source
    print(f"\nSource: {getattr(source, 'url', None) or getattr(source, 'filename', None)}")

    for file_prediction in result.results.predictions:
        for group in file_prediction.models.prosody.grouped_predictions:
            for prediction in group.predictions:
                print(f"\n  Text: {prediction.text}")
                top_emotions = sorted(prediction.emotions, key=lambda e: e.score, reverse=True)[:3]
                for emotion in top_emotions:
                    print(f"    {emotion.name}: {emotion.score:.3f}")

Predictions are also available as CSV files. Use the Get job artifacts endpoint to download a zip archive containing one CSV per model.
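A minimal sketch of downloading that archive, assuming the SDK's get_job_artifacts method streams the zip in chunks (the output filename is arbitrary):

# Save the artifacts zip to disk; it contains one CSV per model.
with open("artifacts.zip", "wb") as f:
    for chunk in client.expression_measurement.batch.get_job_artifacts(id=job_id):
        f.write(chunk)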

Streaming API

The Streaming API provides real-time predictions over a WebSocket connection. This is best for live audio, video, and interactive applications.

Connect and send data

Use the async client to open a streaming connection and send text for analysis. Model configuration is supplied when the connection is opened.

import asyncio
import os

from hume import AsyncHumeClient
from hume.expression_measurement.stream import Config
from hume.expression_measurement.stream.socket_client import StreamConnectOptions
from hume.expression_measurement.stream.types import StreamLanguage

async def stream_text():
    client = AsyncHumeClient(api_key=os.getenv("HUME_API_KEY"))

    # Models are configured when the connection is opened, not per message.
    stream_options = StreamConnectOptions(
        config=Config(language=StreamLanguage(granularity="sentence")),
    )

    async with client.expression_measurement.stream.connect(options=stream_options) as socket:
        result = await socket.send_text(text="I am so excited to try this out!")

        for prediction in result.language.predictions:
            print(f"Text: {prediction.text}")
            top_emotions = sorted(prediction.emotions, key=lambda e: e.score, reverse=True)[:3]
            for emotion in top_emotions:
                print(f"  {emotion.name}: {emotion.score:.3f}")

asyncio.run(stream_text())

Stream a file

You can also send audio or video files through the streaming connection.

import asyncio
import os

from hume import AsyncHumeClient
from hume.expression_measurement.stream import Config
from hume.expression_measurement.stream.socket_client import StreamConnectOptions

async def stream_file():
    client = AsyncHumeClient(api_key=os.getenv("HUME_API_KEY"))

    # Run the prosody model with its default settings.
    stream_options = StreamConnectOptions(config=Config(prosody={}))

    async with client.expression_measurement.stream.connect(options=stream_options) as socket:
        result = await socket.send_file("sample.mp3")

        for prediction in result.prosody.predictions:
            top_emotions = sorted(prediction.emotions, key=lambda e: e.score, reverse=True)[:3]
            for emotion in top_emotions:
                print(f"  {emotion.name}: {emotion.score:.3f}")

asyncio.run(stream_file())