Get Started with Hume's APIs


At Hume AI, we're building a platform for responsible expressive communication technology.
We currently provide two APIs that give access to our suite of models:

  1. Batch API
    Submit asynchronous jobs to process many files in parallel and receive notifications when results are available.
  2. Streaming API
    Get real-time inference on continuous data streams using WebSocket connections.

This quick start guide will help you get up and running with your first API calls.

Getting your API key

  1. Sign in to Hume
  2. Copy your API key by clicking the copy button (the double-box icon) next to it.



Your API key is a random sequence of letters and numbers.

It should look something like ntylOFypHLRXMmjlTxljoecAnMgB30JtOLZC2nph1TYErCvv

Your first API call

Let's start by trying out the facial expression model.
We'll use this picture of our friend, David Hume.

David Hume

Starting a job

To run a job with the Hume Batch API, open a terminal window and run the following curl command (make sure to replace <YOUR-API-KEY> with the actual API key you copied in the steps above):

curl --request POST \
     --url https://api.hume.ai/v0/batch/jobs \
     --header "Content-Type: application/json" \
     --header "X-Hume-Api-Key: <YOUR-API-KEY>" \
     --data '{
         "urls": ["<IMAGE-URL>"],
         "models": {
             "face": {}
         }
     }'

# Response
{
    "job_id": "6b5e7b4f21a247bd8247b91983f12d57"
}
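If you would rather start jobs from code, the same request can be made with Python's standard library. This is a minimal sketch: the endpoint, headers, and request body mirror the curl command above, while build_job_payload and start_job are illustrative names of our own, not part of an official SDK.

```python
import json
import urllib.request

HUME_BATCH_URL = "https://api.hume.ai/v0/batch/jobs"

def build_job_payload(urls):
    """Build the request body for a face-model batch job."""
    return {"urls": list(urls), "models": {"face": {}}}

def start_job(api_key, urls):
    """Submit a batch job and return its job_id."""
    request = urllib.request.Request(
        HUME_BATCH_URL,
        data=json.dumps(build_job_payload(urls)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Hume-Api-Key": api_key,
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["job_id"]
```

Calling start_job("<YOUR-API-KEY>", ["<IMAGE-URL>"]) submits the job and returns the job ID you will need in the next steps.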

Checking job status

Check the status of your job with another command (replace both <YOUR-API-KEY> and <JOB-ID>):

curl --request GET \
     --url https://api.hume.ai/v0/batch/jobs/<JOB-ID> \
     --header 'X-Hume-Api-Key: <YOUR-API-KEY>' \
     --header 'accept: application/json; charset=utf-8'

# Response
{
    "job_id": "<JOB-ID>",
    "request": {...},
    "state": {
        "status": "COMPLETED",
        ...
    },
    "user_id": "<USER-ID>"
}


If your job status is QUEUED or IN_PROGRESS, wait a few seconds and run the last command again. Processing time varies with job size and server load.
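In a script, that retry loop is easy to automate: poll until the job leaves the QUEUED and IN_PROGRESS states. A minimal sketch, where poll_until_done and its fetch_status argument are our own names (fetch_status would wrap the status request shown above):

```python
import time

def poll_until_done(fetch_status, interval=2.0, max_attempts=30):
    """Call fetch_status() until the job reaches a terminal state.

    fetch_status should return one of the job states reported by the
    Batch API, e.g. "QUEUED", "IN_PROGRESS", "COMPLETED", or "FAILED".
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status not in ("QUEUED", "IN_PROGRESS"):
            return status  # terminal state reached
        time.sleep(interval)
    raise TimeoutError("job did not finish within the allotted attempts")
```

A short, fixed polling interval is fine for a quickstart; for large jobs you may want a longer interval or exponential backoff to avoid hammering the API.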

Your predictions

To retrieve your predictions, run (replace both <YOUR-API-KEY> and <JOB-ID>):

curl --request GET \
     --url https://api.hume.ai/v0/batch/jobs/<JOB-ID>/predictions \
     --header 'X-Hume-Api-Key: <YOUR-API-KEY>' \
     --header 'accept: application/json; charset=utf-8'

You can see the bounding box where the face was detected as well as a high-dimensional emotion embedding for the detected face.

{
    "box": { "x": 94.045, "y": 38.421, "w": 66.237, "h": 86.245 },
    "emotions": [
        { "name": "Calmness", "score": 0.220 },
        { "name": "Boredom", "score": 0.198 },
        { "name": "Interest", "score": 0.185 }
        # ... More emotions
    ]
}