Batch API

The Batch API provides access to Hume models through an asynchronous job-based interface.
You can submit a job to have many different files processed in parallel.

Starting a job

For the full list of request parameters, explore our API reference page.

Providing URLs and Files

You can supply data to your job either as URLs to hosted files or as local files.

API Limits

  • The size of any individual file, provided either by URL or as a local file, cannot exceed 1 GB.
  • Each request has an upper limit of 100 URLs and 100 local media files.
    • These can be a mix of media files and archives (.zip, .tar.gz, .tar.bz2, .tar.xz).
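Validating files against these limits client-side avoids a rejected upload. A minimal sketch (the limit constants restate the values above; the function name is our own):

```python
import os

MAX_FILE_BYTES = 1024 ** 3   # 1 GB per file, per the limits above
MAX_ITEMS = 100              # per-request cap on URLs and on local files

def check_local_files(paths):
    """Raise ValueError if a batch of local files would exceed the API limits."""
    if len(paths) > MAX_ITEMS:
        raise ValueError(f"too many files: {len(paths)} > {MAX_ITEMS}")
    for path in paths:
        size = os.path.getsize(path)
        if size > MAX_FILE_BYTES:
            raise ValueError(f"{path} is {size} bytes; the limit is 1 GB")
```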

Compressing data

You may compress or zip your data before uploading:

macOS (excludes hidden files and __MACOSX metadata):
zip -r data_folder.zip data_folder -x ".*" -x "__MACOSX"

Linux (excludes hidden files):
zip -r data_folder.zip data_folder -x ".*"

Windows (Command Prompt):
tar.exe -a -c -f data_folder.zip data_folder

Windows (PowerShell):
Compress-Archive -Path data_folder -DestinationPath data_folder.zip


In this tutorial, the data is publicly available to download. For added security, you may choose to create a signed URL through your preferred cloud storage provider.

Checking Job Status

There are several ways to get notified and check the status of your job.

  1. Poll for the status of a job using its job ID.
  2. Provide a callback URL. When the job is complete, we will send a POST request to that URL.

     { "callback_url": "<YOUR CALLBACK URL>" }

     The callback request body will look like this:

     {
       "job_id": "<JOB ID>",
       "predictions": [<ARRAY OF RESULTS>]
     }