Batch API

The Batch API provides access to Hume models through an asynchronous job-based interface.
You can submit a job to have many different files processed in parallel.

Explore the Batch API on the API reference page.

Providing URLs and Files

You can provide data to your job in any of the following formats: hosted file URLs, local files, or raw text passed as a list of strings.
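
As a minimal sketch, here is how a job might be submitted over the REST API with the Python requests library, passing hosted URLs and raw text. The endpoint path, body fields, and response field shown are assumptions inferred from the retrieval examples later in this guide; consult the API reference page for the authoritative schema.

import requests

API_KEY = "<YOUR_API_KEY>"

# Submit a job with hosted file URLs and raw text strings.
# The endpoint path and body fields are assumptions; see the API reference.
response = requests.post(
    "https://api.hume.ai/v0/batch/jobs",
    headers={"X-Hume-Api-Key": API_KEY},
    json={
        "models": {"language": {}},                     # which models to run (assumed field)
        "urls": ["https://example.com/recording.mp4"],  # hosted file URLs
        "text": ["An example sentence to analyze."],    # raw text as a list of strings
    },
)
response.raise_for_status()
job_id = response.json()["job_id"]  # assumed response field
print(f"Started job {job_id}")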

API Limits

  • The size of any individual file provided by URL cannot exceed 1 GB.
  • The size of any individual local file cannot exceed 100 MB.
  • Each request has an upper limit of 100 URLs, 100 strings (raw text), and 100 local media files.
    • Local files can be a mix of media files and archives (.zip, .tar.gz, .tar.bz2, .tar.xz).
  • For audio and video files, the maximum supported length is one hour.

Compressing data

You may compress/zip your data:

# macOS
zip -r data.zip data_folder -x ".*" -x "__MACOSX"
# Linux
zip -r data.zip data_folder -x ".*"
# Windows (tar)
tar.exe -a -c -f data.zip data_folder
# Windows (PowerShell)
Compress-Archive data_folder data.zip

Security

In this tutorial, the data is publicly available to download. For added security, you may choose to create a signed URL through your preferred cloud storage provider.
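
As one hedged example, assuming your files live in AWS S3, boto3 can generate a time-limited presigned URL for a privately stored object; the bucket and key names below are hypothetical, and other cloud providers offer equivalent mechanisms.

import boto3

s3 = boto3.client("s3")

# Generate a presigned GET URL that expires in one hour
# (bucket and key names are hypothetical).
signed_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "data/recording.mp4"},
    ExpiresIn=3600,
)
print(signed_url)  # supply this URL to the job like any other hosted file URL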

Checking Job Status

🚧 Use webhooks to receive notifications asynchronously once the job completes. Polling the API periodically for job status is not recommended.

There are several ways to get notified and check the status of your job.

  1. Check the status of a job with the Get Job Details endpoint.
  2. Provide a callback URL when submitting the job. When the job completes, we will send a POST request to that URL (see the receiver sketch after this list).
    { "callback_url": "<YOUR CALLBACK URL>" }
    The callback request body will look like this:
{
  "job_id": "<JOB_ID>",
  "status": "<STATUS (COMPLETED/FAILED)>",
  "predictions": [<ARRAY OF RESULTS>]
}
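
Any HTTP endpoint you control can receive this callback. Below is a minimal receiver sketch using Flask; the framework choice and the /hume-callback route are assumptions, and the field names mirror the example body above.

from flask import Flask, request

app = Flask(__name__)

# Minimal callback receiver; field names mirror the example body above.
@app.route("/hume-callback", methods=["POST"])
def hume_callback():
    payload = request.get_json()
    job_id = payload.get("job_id")
    if payload.get("status") == "COMPLETED":
        predictions = payload.get("predictions", [])
        print(f"Job {job_id} completed with {len(predictions)} results")
    else:
        print(f"Job {job_id} failed")
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)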

Retrieve the predictions

Your predictions are available in a few formats.

To get results as JSON, use Get Job Predictions:

job.get_predictions()                       # Python SDK: return the predictions directly
or
job.download_predictions("filename.json")  # Python SDK: save the predictions to a file
curl --request GET \
     --url https://api.hume.ai/v0/batch/jobs/<JOB_ID>/predictions \
     --header 'X-Hume-Api-Key: <YOUR_API_KEY>' \
     --header 'accept: application/json; charset=utf-8'
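
The same request can be made in Python with the requests library, mirroring the cURL call above; the assumption that the response is a list with one entry per submitted input should be checked against the API reference.

import requests

# Python equivalent of the cURL call above.
response = requests.get(
    "https://api.hume.ai/v0/batch/jobs/<JOB_ID>/predictions",
    headers={
        "X-Hume-Api-Key": "<YOUR_API_KEY>",
        "accept": "application/json; charset=utf-8",
    },
)
response.raise_for_status()
predictions = response.json()
print(f"Received results for {len(predictions)} inputs")  # assumes one entry per input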

To get a compressed file of CSVs, one per model, use Get Job Artifacts:

job.download_artifacts("filename.zip")
curl --request GET \
     --url https://api.hume.ai/v0/batch/jobs/<JOB_ID>/artifacts \
     --header 'X-Hume-Api-Key: <YOUR_API_KEY>' \
     --header 'accept: application/octet-stream'
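
The downloaded artifacts file is a standard zip archive, so it can be unpacked with the Python standard library; the filename here matches the example above.

import zipfile

# Unpack the downloaded archive of per-model CSVs.
with zipfile.ZipFile("filename.zip") as archive:
    archive.extractall("artifacts")
    print(archive.namelist())  # one CSV per model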