Batch API

The Batch API provides access to Hume models through an asynchronous job-based interface.
You can submit a job to have many different files processed in parallel.

Explore the Batch API on the API reference page.
Providing URLs and Files

You can supply your job with data either as URLs to hosted files or as local files.

API Limits

  • The size of any individual file, whether provided by URL or as a local file, cannot exceed 1 GB.
  • Each request has an upper limit of 100 URLs and 100 local media files.
    • These can be a mix of media files and archives (.zip, .tar.gz, .tar.bz2, .tar.xz).
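As a sketch of how a job over hosted URLs might be submitted (the model name, example URL, and file name below are illustrative assumptions, not part of this guide; see the API reference for the full request schema):

```shell
# Build the request body: one hosted URL, one model (illustrative values).
cat > job.json <<'EOF'
{
  "models": { "prosody": {} },
  "urls": ["https://example.com/audio.mp3"]
}
EOF

# Submit the job; skipped here unless an API key is present in the environment.
if [ -n "${HUME_API_KEY:-}" ]; then
  curl --request POST \
       --url https://api.hume.ai/v0/batch/jobs \
       --header "X-Hume-Api-Key: $HUME_API_KEY" \
       --header 'Content-Type: application/json' \
       --data @job.json
fi
```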

Compressing data

You may compress/zip your data before uploading:

# macOS
zip -r data.zip data_folder -x ".*" -x "__MACOSX"

# Linux
zip -r data.zip data_folder -x ".*"

# Windows (Command Prompt)
tar.exe -a -c -f data.tar.gz data_folder

# Windows (PowerShell)
Compress-Archive -Path data_folder -DestinationPath data.zip


In this tutorial, the data is publicly available to download. For added security, you may choose to create a signed URL through your preferred cloud storage provider.

Checking Job Status

Use webhooks to receive a notification asynchronously once the job completes; polling the API periodically for job status is not recommended.

There are several ways to get notified and check the status of your job.

  1. Check the status of a job with the Get Job Details endpoint.
  2. Provide a callback URL. When you provide one, we will send a POST request to it once the job is complete:
    { "callback_url": "<YOUR CALLBACK URL>" }
    The callback request body will look like this:
    {
      "job_id": "<JOB_ID>",
      "predictions": [ARRAY OF RESULTS]
    }
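Putting this together, a job request that registers a callback might look like the following sketch (the model name and example URLs are illustrative assumptions, not part of this guide):

```json
{
  "models": { "prosody": {} },
  "urls": ["https://example.com/audio.mp3"],
  "callback_url": "https://example.com/hume-callback"
}
```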

Retrieving Predictions

Your predictions are available in a few formats.

To get JSON, use Get Job Predictions:

curl --request GET \
     --url https://api.hume.ai/v0/batch/jobs/<JOB_ID>/predictions \
     --header 'X-Hume-Api-Key: <YOUR_API_KEY>' \
     --header 'accept: application/json; charset=utf-8'

To get a compressed file of CSVs (one per model), use Get Job Artifacts:

curl --request GET \
     --url https://api.hume.ai/v0/batch/jobs/<JOB_ID>/artifacts \
     --header 'X-Hume-Api-Key: <YOUR_API_KEY>' \
     --header 'accept: application/octet-stream'
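The artifacts response is a compressed archive; once saved (for example with curl's --output artifacts.zip), it can be unpacked with standard tools. A sketch, using a locally created stand-in archive so the commands run without an API call (the CSV name and contents are illustrative assumptions):

```shell
# Stand-in for the downloaded archive (a real one comes from the artifacts
# endpoint, saved via: curl ... --output artifacts.zip).
mkdir -p demo
printf 'frame,score\n0,0.5\n' > demo/prosody.csv
(cd demo && python3 -m zipfile -c ../artifacts.zip prosody.csv)

# Unpack and list the per-model CSVs.
python3 -m zipfile -e artifacts.zip artifacts
ls artifacts
```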