Batch API
The Batch API provides access to Hume models through an asynchronous job-based interface.
You can submit a single job to have many files processed in parallel.
Starting a job
Explore our API reference page for the full request and response schemas.
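As a starting point, here is a minimal sketch of submitting a job in Python using only the standard library. The endpoint URL, `X-Hume-Api-Key` header, and the `urls`/`models`/`job_id` field names follow Hume's v0 batch API as documented in the API reference; treat them as assumptions and verify them there before use.

```python
import json
import urllib.request

# Assumed v0 batch endpoint; confirm against the current API reference.
HUME_BATCH_URL = "https://api.hume.ai/v0/batch/jobs"


def build_job_request(urls, models):
    """Build the JSON body for a batch job submission."""
    return {"urls": urls, "models": models}


def start_job(api_key, urls, models):
    """Submit a batch job and return the job ID from the response."""
    req = urllib.request.Request(
        HUME_BATCH_URL,
        data=json.dumps(build_job_request(urls, models)).encode(),
        headers={"X-Hume-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]
```

The returned job ID is what you later use to check the job's status.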
Providing URLs and Files
You can supply data to your job either as hosted files referenced by URL or as local files uploaded with the request.
API Limits
- The size of any individual file, whether provided by URL or uploaded locally, cannot exceed 1 GB.
- Each request can include at most 100 URLs and 100 local media files.
- Inputs can be a mix of media files and archives (.zip, .tar.gz, .tar.bz2, .tar.xz).
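Since an over-limit request will be rejected, it can save time to validate inputs client-side before uploading. A minimal sketch of such a check, using the limits listed above:

```python
import os

MAX_FILE_BYTES = 1 * 1024**3  # 1 GB per-file limit
MAX_ITEMS = 100               # per-request limit for URLs and for local files


def validate_local_files(paths):
    """Check local files against the documented batch limits.

    Raises ValueError before any upload is attempted.
    """
    if len(paths) > MAX_ITEMS:
        raise ValueError(f"at most {MAX_ITEMS} local files per request")
    for path in paths:
        if os.path.getsize(path) > MAX_FILE_BYTES:
            raise ValueError(f"{path} exceeds the 1 GB per-file limit")
```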
Compressing data
You may compress your data before uploading:

```shell
# macOS: exclude hidden files and Finder metadata
zip -r data.zip data_folder -x ".*" -x "__MACOSX"

# Linux: exclude hidden files
zip -r data.zip data_folder -x ".*"

# Windows (built-in tar)
tar.exe -a -c -f data.zip data_folder

# Windows (PowerShell)
Compress-Archive -Path data_folder -DestinationPath data.zip
```
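If you are preparing archives programmatically, the same result can be achieved cross-platform with Python's standard-library `zipfile`. This sketch applies the same exclusions as the shell commands above:

```python
import os
import zipfile


def zip_folder(folder, archive_path):
    """Zip a folder for upload, skipping hidden files and macOS metadata
    (the same exclusions as the zip commands above). Write the archive
    outside `folder` so it does not include itself."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(folder):
            # Prune hidden directories (and __MACOSX) from the walk in place.
            dirs[:] = [d for d in dirs
                       if not d.startswith(".") and d != "__MACOSX"]
            for name in files:
                if name.startswith("."):
                    continue
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, folder))
```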
Security
In this tutorial, the data is publicly available to download. For added security, you may choose to create a signed, time-limited URL through your preferred cloud storage provider.
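In practice you would use your provider's SDK for this (for example, S3 pre-signed URLs); each provider has its own signing scheme. Purely to illustrate the underlying idea, here is a sketch of an HMAC-signed URL with an expiry, using only the standard library. All names here are hypothetical, not part of any provider's API:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Hypothetical server-side secret; a real provider manages this for you.
SECRET = b"server-side-secret"


def sign_url(base_url, path, ttl_seconds=3600, now=None):
    """Append an expiry timestamp and an HMAC signature to a URL."""
    expires = int(now if now is not None else time.time()) + ttl_seconds
    msg = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{base_url}{path}?" + urlencode({"expires": expires, "sig": sig})


def verify(path, expires, sig, now=None):
    """Accept the request only if the signature matches and has not expired."""
    now = now if now is not None else time.time()
    msg = f"{path}?expires={expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and now < int(expires)
```

The storage server checks the signature before serving the file, so the URL stops working once the expiry passes.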
Checking Job Status
There are several ways to get notified and check the status of your job.
- Once a job has been submitted, its status can be checked with the job ID.
- Provide a callback URL in your request. If you do, we will send a POST request to that URL when the job is complete.

```json
{ "callback_url": "<YOUR CALLBACK URL>" }
```
The callback request body will look like this:

```json
{
  "job_id": "<JOB_ID>",
  "status": "<STATUS (COMPLETED or FAILED)>",
  "predictions": "<ARRAY OF RESULTS>"
}
```
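On your side, the callback handler only needs to parse that body and branch on the status. A minimal sketch, using the field names from the example body above:

```python
import json


def handle_callback(raw_body):
    """Parse the callback POST body and dispatch on job status."""
    payload = json.loads(raw_body)
    job_id = payload["job_id"]
    if payload["status"] == "COMPLETED":
        return job_id, payload.get("predictions", [])
    # FAILED: surface the failure to the caller instead of returning results.
    raise RuntimeError(f"batch job {job_id} failed")
```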