TypeScript quickstart

Submit batch jobs and stream predictions in real time using Hume's TypeScript SDK.

This guide walks you through using Hume’s Expression Measurement API with the TypeScript SDK. You will submit a batch job to analyze a media file and then connect to the streaming API for real-time predictions.

Setup

Install the SDK

$ bun add hume

Set your API key

Get your API key from the Hume AI platform and set it as an environment variable.

$ export HUME_API_KEY=your_api_key_here

Create the client

import { HumeClient } from "hume";

const client = new HumeClient({ apiKey: process.env.HUME_API_KEY });
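
The constructor accepts whatever value is in HUME_API_KEY, so a missing variable may only surface later as an authentication error. An optional guard (plain TypeScript, not part of the SDK) fails fast instead:

// Optional: verify the environment variable before constructing the client.
if (!process.env.HUME_API_KEY) {
  throw new Error("HUME_API_KEY is not set; see the setup step above.");
}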

Batch API

The Batch API lets you submit files for processing and retrieve predictions when the job completes. This is best for analyzing recordings, datasets, and other pre-recorded content.

Submit a job and wait for completion

Start a job by specifying the models you want to run and the URLs of the files to process. The SDK provides a built-in awaitCompletion method that polls the job status for you.

You can also upload local files instead of URLs. See the API reference for the local file upload endpoint; a sketch is shown after the example below.

const job = await client.expressionMeasurement.batch.startInferenceJob({
  urls: ["https://hume-tutorials.s3.amazonaws.com/faces.zip"],
  models: {
    face: {},
  },
});

console.log(`Job ID: ${job.jobId}`);

// Wait for the job to complete (default timeout: 300 seconds)
await job.awaitCompletion();
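
For local files, a minimal sketch follows. It assumes the SDK exposes the local-file endpoint as startInferenceJobFromLocalFile, taking the files plus the same models configuration; check the API reference for the exact method name and request shape.

import { createReadStream } from "fs";

// Sketch only: the method name and request shape here are assumptions based on the
// "start inference job from local file" endpoint; confirm them in the API reference.
const localJob = await client.expressionMeasurement.batch.startInferenceJobFromLocalFile(
  [createReadStream("sample.mp4")],
  { json: { models: { face: {} } } }
);

console.log(`Job ID: ${localJob.jobId}`);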

For production use, consider passing a callbackUrl when submitting the job. Hume will send a POST request to your URL when the job completes. The webhook payload includes the jobId, status, and predictions.
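
A job with a webhook might be started like this; the URL is a placeholder for your own endpoint, and callbackUrl mirrors the API's callback_url field:

// Placeholder endpoint: replace with a URL your server exposes for Hume's POST callback.
const jobWithWebhook = await client.expressionMeasurement.batch.startInferenceJob({
  urls: ["https://hume-tutorials.s3.amazonaws.com/faces.zip"],
  models: { face: {} },
  callbackUrl: "https://example.com/hume-webhook",
});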

Retrieve predictions

Once the job completes, retrieve and print the predictions.

const predictions = await client.expressionMeasurement.batch.getJobPredictions(
  job.jobId
);

for (const result of predictions) {
  for (const filePrediction of result.results?.predictions ?? []) {
    console.log(`\nFile: ${filePrediction.file}`);

    for (const group of filePrediction.models?.face?.groupedPredictions ?? []) {
      for (const prediction of group.predictions) {
        const topEmotions = [...prediction.emotions]
          .sort((a, b) => b.score - a.score)
          .slice(0, 3);

        for (const emotion of topEmotions) {
          console.log(`  ${emotion.name}: ${emotion.score.toFixed(3)}`);
        }
      }
    }
  }
}
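
The response contains more fields than the loop above prints. While exploring the structure, it can help to write the raw predictions to disk:

import { writeFileSync } from "fs";

// Dump the unmodified response so the nested prediction structure is easy to inspect.
writeFileSync("predictions.json", JSON.stringify(predictions, null, 2));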

Predictions are also available as CSV files. Use the Get job artifacts endpoint to download a zip archive containing one CSV per model.
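
A sketch of downloading that archive with the SDK is below; it assumes a getJobArtifacts method that resolves to a Node readable stream of the zip, which should be checked against the API reference.

import { createWriteStream } from "fs";
import { pipeline } from "stream/promises";

// Sketch only: assumes getJobArtifacts exists on the batch client and returns a
// readable stream of the zip archive described above.
const artifacts = await client.expressionMeasurement.batch.getJobArtifacts(job.jobId);
await pipeline(artifacts, createWriteStream("artifacts.zip"));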

Streaming API

The Streaming API provides real-time predictions over a WebSocket connection. This is best for live audio, video, and interactive applications.

Connect and send data

Open a streaming connection and send text for analysis.

const socket = client.expressionMeasurement.stream.connect({
  config: {
    language: { granularity: "sentence" },
  },
});

const result = await socket.sendText({
  text: "I am so excited to try this out!",
});

for (const prediction of result.language.predictions) {
  console.log(`Text: ${prediction.text}`);

  const topEmotions = [...prediction.emotions]
    .sort((a, b) => b.score - a.score)
    .slice(0, 3);

  for (const emotion of topEmotions) {
    console.log(`  ${emotion.name}: ${emotion.score.toFixed(3)}`);
  }
}

socket.close();
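
The connection can carry more than one payload before you close it; each sendText call awaits its own predictions, so sending several inputs over the same socket looks like this:

const messages = [
  "The team pulled it off ahead of schedule.",
  "I'm not sure this is going to work.",
];

for (const text of messages) {
  // Reuse the open WebSocket: one payload per call, one set of predictions back.
  const reply = await socket.sendText({ text });

  for (const prediction of reply.language.predictions) {
    const top = [...prediction.emotions].sort((a, b) => b.score - a.score)[0];
    console.log(`${prediction.text} -> ${top.name} (${top.score.toFixed(3)})`);
  }
}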

Stream a file

You can also send audio or video files through the streaming connection.

import { createReadStream } from "fs";

const socket = client.expressionMeasurement.stream.connect({
  config: {
    prosody: {},
  },
});

const result = await socket.sendFile({
  file: createReadStream("sample.mp3"),
});

for (const prediction of result.prosody.predictions) {
  const topEmotions = [...prediction.emotions]
    .sort((a, b) => b.score - a.score)
    .slice(0, 3);

  for (const emotion of topEmotions) {
    console.log(`  ${emotion.name}: ${emotion.score.toFixed(3)}`);
  }
}

socket.close();
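
If reading the file or sending it fails, the snippet above never reaches socket.close(). Wrapping the work in try/finally keeps the connection from leaking; a variant of the same example:

const socket = client.expressionMeasurement.stream.connect({
  config: { prosody: {} },
});

try {
  const result = await socket.sendFile({
    file: createReadStream("sample.mp3"),
  });
  console.log(`Received ${result.prosody.predictions.length} prosody predictions`);
} finally {
  // Always release the WebSocket, even if sendFile throws.
  socket.close();
}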