Expression Measurement API

Processing batches of media files

Hume’s Expression Measurement API is designed to facilitate large-scale processing of files using Hume’s advanced models through an asynchronous, job-based interface. This API allows developers to submit jobs for parallel processing of various files, enabling efficient handling of multiple data points simultaneously, and to receive notifications when results are available.

Key features

  • Asynchronous job submission: Jobs can be submitted to process a wide array of files in parallel, making the API ideal for applications that require the analysis of large volumes of data.

  • Flexible data input options: The API supports multiple data formats, including hosted file URLs, local files directly from your system, and raw text in the form of a list of strings. This versatility ensures that you can easily integrate the API into your applications, regardless of where your data resides.

Applications and use cases

Hume’s Expression Measurement API is particularly useful for leveraging Hume’s expressive models across a broad spectrum of files and formats. Whether it’s for processing large datasets for research, analyzing customer feedback across multiple channels, or enriching user experiences in media-rich applications, the API provides a robust solution for asynchronously handling complex, data-intensive tasks.

Using Hume’s Expression Measurement API

Here we’ll show you how to upload your own files and run Hume models on batches of data. If you haven’t already, grab your API Key.

Making a request to the API

Start a new job with the Expression Measurement API.

curl https://api.hume.ai/v0/batch/jobs \
  --request POST \
  --header "Content-Type: application/json" \
  --header "X-Hume-Api-Key: <YOUR API KEY>" \
  --data '{
    "models": {
      "face": {}
    },
    "urls": [
      "https://hume-tutorials.s3.amazonaws.com/faces.zip"
    ]
  }'
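
If you prefer to start the job from code, here is a minimal Python sketch of the same request, assuming the third-party requests library is installed (pip install requests). The endpoint, headers, and body mirror the curl command above.

import requests

HUME_API_KEY = "<YOUR API KEY>"  # replace with your API key

response = requests.post(
    "https://api.hume.ai/v0/batch/jobs",
    headers={
        "Content-Type": "application/json",
        "X-Hume-Api-Key": HUME_API_KEY,
    },
    json={
        "models": {"face": {}},
        "urls": ["https://hume-tutorials.s3.amazonaws.com/faces.zip"],
    },
)
response.raise_for_status()
print(response.json())  # the response includes the ID of the new job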

To do the same with local files:

curl https://api.hume.ai/v0/batch/jobs \
  --request POST \
  --header "Content-Type: multipart/form-data" \
  --header "X-Hume-Api-Key: <YOUR API KEY>" \
  --form json='{
    "models": {
      "face": {}
    }
  }' \
  --form file=@faces.zip \
  --form file=@david_hume.jpeg

Sample files for you to use in this tutorial are available for download: faces.zip and david_hume.jpeg.
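
The local-file request above can also be made from Python. The sketch below again assumes the requests library; the JSON job configuration and each file are sent as separate parts of a multipart/form-data body (requests builds the multipart boundary itself, so no Content-Type header is set manually).

import json
import requests

HUME_API_KEY = "<YOUR API KEY>"

job_config = {"models": {"face": {}}}

with open("faces.zip", "rb") as archive, open("david_hume.jpeg", "rb") as image:
    response = requests.post(
        "https://api.hume.ai/v0/batch/jobs",
        headers={"X-Hume-Api-Key": HUME_API_KEY},
        files=[
            # The job configuration goes in the "json" form field.
            ("json", (None, json.dumps(job_config), "application/json")),
            # Each local file is attached as a separate "file" part.
            ("file", ("faces.zip", archive, "application/zip")),
            ("file", ("david_hume.jpeg", image, "image/jpeg")),
        ],
    )

response.raise_for_status()
print(response.json())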

Checking job status

Use webhooks to asynchronously receive notifications once the job completes. It is not recommended to poll the API periodically for job status.

There are two ways to get notified and check the status of your job.

  1. Using the Get job details API endpoint.
  2. Providing a callback URL. Add it to your job request body as { "callback_url": "<YOUR CALLBACK URL>" }. When the job is complete, we will send a POST request to that URL with a payload like the following (see the receiver sketch below this list):
JSON
{
  "job_id": "Job ID",
  "status": "STATUS (COMPLETED/FAILED)",
  "predictions": [ARRAY OF RESULTS]
}
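
If you provide a callback URL, you need an HTTP endpoint that accepts the POST request shown above. Here is a minimal sketch using Flask (pip install flask); the route path and port are placeholders to adapt to your own deployment.

from flask import Flask, request

app = Flask(__name__)

@app.route("/hume-callback", methods=["POST"])
def hume_callback():
    # Parse the JSON payload sent by Hume when the job finishes.
    payload = request.get_json()
    job_id = payload.get("job_id")
    status = payload.get("status")

    if status == "COMPLETED":
        predictions = payload.get("predictions", [])
        print(f"Job {job_id} completed with {len(predictions)} prediction entries")
    else:
        print(f"Job {job_id} failed")

    # Respond quickly; do any heavy processing of the predictions elsewhere.
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)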

Retrieving predictions

Your predictions are available in a few formats.

To get predictions as JSON, use the Get job predictions endpoint.

curl --request GET \
  --url https://api.hume.ai/v0/batch/jobs/<JOB_ID>/predictions \
  --header 'X-Hume-Api-Key: <YOUR API KEY>' \
  --header 'accept: application/json; charset=utf-8'
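
The same request in Python, as a minimal sketch with the requests library. Replace JOB_ID with the ID returned when you started the job.

import requests

HUME_API_KEY = "<YOUR API KEY>"
JOB_ID = "<JOB_ID>"

response = requests.get(
    f"https://api.hume.ai/v0/batch/jobs/{JOB_ID}/predictions",
    headers={
        "X-Hume-Api-Key": HUME_API_KEY,
        "accept": "application/json; charset=utf-8",
    },
)
response.raise_for_status()
predictions = response.json()
print(predictions)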

To get predictions as a compressed file of CSVs (one per model), use the Get job artifacts endpoint.

curl --request GET \
  --url https://api.hume.ai/v0/batch/jobs/<JOB_ID>/artifacts \
  --header 'X-Hume-Api-Key: <YOUR API KEY>' \
  --header 'accept: application/octet-stream'
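
In Python, the artifacts can be downloaded and written to disk as sketched below (again assuming requests; the output filename is an arbitrary choice).

import requests

HUME_API_KEY = "<YOUR API KEY>"
JOB_ID = "<JOB_ID>"

response = requests.get(
    f"https://api.hume.ai/v0/batch/jobs/{JOB_ID}/artifacts",
    headers={
        "X-Hume-Api-Key": HUME_API_KEY,
        "accept": "application/octet-stream",
    },
)
response.raise_for_status()

# Save the compressed artifacts (one CSV per model) to a local file.
with open("artifacts.zip", "wb") as f:
    f.write(response.content)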

API limits

  • The size of any individual file provided by URL cannot exceed 1 GB.

  • The size of any individual local file cannot exceed 100 MB.

  • Each request has an upper limit of 100 URLs, 100 strings (raw text), and 100 local media files.

    • These can be a mix of media files and archives (.zip, .tar.gz, .tar.bz2, .tar.xz).
  • For audio and video files, the maximum supported length is one hour.
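
The limits above can be checked client-side before a job is submitted. The helper below is a hypothetical convenience, not part of the Hume API; the 100 MB per-file and 100-file figures are taken from this section.

import os

MAX_LOCAL_FILE_BYTES = 100 * 1024 * 1024  # 100 MB per local file
MAX_FILES_PER_REQUEST = 100               # at most 100 local media files per request

def check_local_files(paths):
    """Raise an error if the given local files would exceed the request limits."""
    if len(paths) > MAX_FILES_PER_REQUEST:
        raise ValueError(f"Too many files: {len(paths)} > {MAX_FILES_PER_REQUEST}")
    for path in paths:
        size = os.path.getsize(path)
        if size > MAX_LOCAL_FILE_BYTES:
            raise ValueError(f"{path} is {size} bytes, over the 100 MB per-file limit")

check_local_files(["faces.zip", "david_hume.jpeg"])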

Providing URLs and files

You can provide data for your job in one of the following formats: hosted file URLs, local files, or raw text presented as a list of strings.

In this tutorial, the data is publicly available to download. For added security, you may choose to create a signed URL through your preferred cloud storage provider.
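
For example, if your files live in Amazon S3, a pre-signed URL can be generated with boto3 and then passed in the job's "urls" array. This is a minimal sketch under that assumption; the bucket and key names are placeholders, and other cloud providers offer equivalent signed-URL mechanisms.

import boto3

s3 = boto3.client("s3")

# Generate a temporary, signed download URL for a private object.
signed_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "faces.zip"},
    ExpiresIn=3600,  # URL valid for one hour
)

# Pass `signed_url` in the "urls" array when starting the job.
print(signed_url)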