Processing batches of media files
Hume’s Expression Measurement API is designed to facilitate large-scale processing of files using Hume’s advanced models through an asynchronous, job-based interface. This API allows developers to submit jobs that process many files in parallel, enabling efficient handling of multiple data points simultaneously, and to receive notifications when results are available.
Key features
- Asynchronous job submission: Jobs can be submitted to process a wide array of files in parallel, making the API ideal for applications that require the analysis of large volumes of data.
- Flexible data input options: The API supports multiple data formats, including hosted file URLs, local files uploaded directly from your system, and raw text provided as a list of strings. This versatility ensures that you can easily integrate the API into your applications, regardless of where your data resides.
Applications and use cases
Hume’s Expression Measurement API is particularly useful for leveraging Hume’s expressive models across a broad spectrum of files and formats. Whether it’s for processing large datasets for research, analyzing customer feedback across multiple channels, or enriching user experiences in media-rich applications, the API provides a robust solution for asynchronously handling complex, data-intensive tasks.
Using Hume’s Expression Measurement API
Here we’ll show you how to upload your own files and run Hume models on batches of data. If you haven’t already, grab your API Key.
Making a request to the API
Start a new job with the Expression Measurement API.
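Below is a minimal sketch in Python using the requests library. The endpoint path (https://api.hume.ai/v0/batch/jobs), the X-Hume-Api-Key header, the face model configuration, and the job_id response field follow Hume’s batch API reference at the time of writing; the file URL is a placeholder, so verify the details against the current API reference.

```python
import requests

# Replace with your own API key (see the "grab your API Key" step above).
HUME_API_KEY = "<YOUR API KEY>"

# Submit a job that runs the face model on a hosted file URL.
# The payload shape follows Hume's batch REST API; {"face": {}} runs the
# face model with default settings, and the URL below is a placeholder.
response = requests.post(
    "https://api.hume.ai/v0/batch/jobs",
    headers={"X-Hume-Api-Key": HUME_API_KEY},
    json={
        "models": {"face": {}},
        "urls": ["https://<YOUR FILE HOST>/faces.zip"],
    },
)
response.raise_for_status()

# The response contains the ID of the newly created job.
job_id = response.json()["job_id"]
print(f"Started job: {job_id}")
```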
To do the same with a local file:
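The sketch below assumes the batch endpoint also accepts a multipart form with a json field for the model configuration and a file field for the upload, which matches Hume’s batch API reference at the time of writing; confirm the field names before relying on it. It uses the david_hume.jpeg sample mentioned below.

```python
import json
import requests

HUME_API_KEY = "<YOUR API KEY>"

# Submit the same face-model job, but upload a local file instead of a URL.
# Assumption: the batch endpoint accepts a multipart form with a "json"
# field (model configuration) and a "file" field (the media itself).
with open("david_hume.jpeg", "rb") as media:
    response = requests.post(
        "https://api.hume.ai/v0/batch/jobs",
        headers={"X-Hume-Api-Key": HUME_API_KEY},
        data={"json": json.dumps({"models": {"face": {}}})},
        files={"file": media},
    )

response.raise_for_status()
print(f"Started job: {response.json()['job_id']}")
```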
Sample files for you to use in this tutorial are available for download: faces.zip and david_hume.jpeg.
Checking job status
We recommend using webhooks to receive a notification asynchronously once the job completes, rather than polling the API periodically for job status. There are several ways to get notified and check the status of your job:
- Using the Get job details API endpoint.
- Providing a callback URL when you submit the job. We will send a POST request to your URL when the job is complete. Your job request body should include the callback URL like this:
{ "callback_url": "<YOUR CALLBACK URL>" }
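If you include the callback_url shown above in your job request body, Hume will POST to that URL when the job completes. For the first option, here is a sketch of calling the Get job details endpoint with the requests library; the path and the state.status field in the response follow Hume’s API reference at the time of writing and may differ in detail.

```python
import requests

HUME_API_KEY = "<YOUR API KEY>"
job_id = "<YOUR JOB ID>"  # returned when the job was submitted

# Fetch the job details, which include the current status
# (e.g. QUEUED, IN_PROGRESS, COMPLETED, or FAILED).
response = requests.get(
    f"https://api.hume.ai/v0/batch/jobs/{job_id}",
    headers={"X-Hume-Api-Key": HUME_API_KEY},
)
response.raise_for_status()

details = response.json()
print(details["state"]["status"])
```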
Retrieving predictions
Your predictions are available in a few formats.
To get predictions as JSON, use the Get job predictions endpoint.
To get predictions as a compressed file of CSVs (one per model), use the Get job artifacts endpoint.
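Here is a sketch of both retrieval paths using the requests library; the /predictions and /artifacts paths follow Hume’s API reference at the time of writing, and the artifacts response is assumed to be a ZIP archive that can be written straight to disk.

```python
import requests

HUME_API_KEY = "<YOUR API KEY>"
job_id = "<YOUR JOB ID>"
headers = {"X-Hume-Api-Key": HUME_API_KEY}

# Get predictions as JSON.
predictions = requests.get(
    f"https://api.hume.ai/v0/batch/jobs/{job_id}/predictions",
    headers=headers,
)
predictions.raise_for_status()
print(predictions.json())

# Get artifacts as a compressed file of CSVs (one per model) and save it locally.
artifacts = requests.get(
    f"https://api.hume.ai/v0/batch/jobs/{job_id}/artifacts",
    headers=headers,
)
artifacts.raise_for_status()
with open("artifacts.zip", "wb") as f:
    f.write(artifacts.content)
```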
API limits
- The size of any individual file provided by URL cannot exceed 1 GB.
- The size of any individual local file cannot exceed 100 MB.
- Each request has an upper limit of 100 URLs, 100 strings (raw text), and 100 local media files, which can be a mix of media files and archives (.zip, .tar.gz, .tar.bz2, .tar.xz).
- For audio and video files, the maximum supported length is 3 hours.
- The limit for each individual text string for the Expression Measurement API is 255 MB.
- The limit to the number of jobs that can be queued at a time is 500.
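If it is helpful, you can validate inputs against these limits client-side before submitting. The sketch below checks only the local-file size and per-request count limits listed above; the constant and function names are illustrative, and the API remains the source of truth.

```python
import os

# Limits from the list above (illustrative client-side check only;
# the API remains the source of truth).
MAX_LOCAL_FILE_BYTES = 100 * 1024 * 1024   # 100 MB per local file
MAX_ITEMS_PER_REQUEST = 100                # per URLs, strings, or local files

def validate_batch(local_paths, urls, texts):
    """Raise ValueError if the batch obviously exceeds the documented limits."""
    groups = (("local files", local_paths), ("URLs", urls), ("strings", texts))
    for name, group in groups:
        if len(group) > MAX_ITEMS_PER_REQUEST:
            raise ValueError(f"Too many {name}: {len(group)} > {MAX_ITEMS_PER_REQUEST}")
    for path in local_paths:
        if os.path.getsize(path) > MAX_LOCAL_FILE_BYTES:
            raise ValueError(f"{path} exceeds the 100 MB local file limit")
```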
Providing URLs and files
You can provide data for your job in one of the following formats: hosted file URLs, local files, or raw text presented as a list of strings.
In this tutorial, the data is publicly available to download. For added security, you may choose to create a signed URL through your preferred cloud storage provider.
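To illustrate how these inputs can be combined, the sketch below submits hosted URLs and raw text strings in a single request; the urls and text field names follow Hume’s batch API reference at the time of writing, and the URL, text, and model choice here are placeholders.

```python
import requests

HUME_API_KEY = "<YOUR API KEY>"

# One request can combine hosted URLs and raw text strings.
# Local files would be sent as multipart form parts instead, as shown earlier.
response = requests.post(
    "https://api.hume.ai/v0/batch/jobs",
    headers={"X-Hume-Api-Key": HUME_API_KEY},
    json={
        "models": {"language": {}},
        "urls": ["https://<YOUR FILE HOST>/recording.mp4"],
        "text": ["I am so excited about this tutorial!"],
    },
)
response.raise_for_status()
print(response.json()["job_id"])
```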