EVI Python Quickstart
A quickstart guide for integrating the Empathic Voice Interface (EVI) with Python.
In this guide, you’ll learn how to integrate EVI into your Python applications using Hume’s Python SDK.
- Environment setup: Download package and system dependencies to run EVI.
- Import statements and helpers: Import needed symbols and define helper functions.
- Authentication: Use your API credentials to authenticate your EVI application.
- Connection: Set up a secure WebSocket connection to interact with EVI.
- Handling incoming messages: Process messages and queue audio for playback.
- Audio input: Capture audio data from an input device and send to EVI.
See the complete implementation of this guide on GitHub
Explore or contribute to Hume’s Python SDK on GitHub
Hume’s Python SDK supports EVI on Python versions 3.9, 3.10, and 3.11 on macOS and Linux platforms. The full specification can be found in the Python SDK’s README.
Environment setup
This guide uses the Hume SDK Python package hume with the [microphone] package extra. You can install it with uv (recommended), poetry, or pip. It also uses the python-dotenv package for loading environment variables from a .env file.
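For example, with pip (installation with uv or poetry is analogous):

```sh
# Quote the extra so your shell does not expand the brackets.
pip install "hume[microphone]" python-dotenv
```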
System dependencies
The Hume Python SDK uses the sounddevice library for audio recording and playback, which requires the PortAudio C library to be installed on your system. On macOS and Windows, PortAudio is typically bundled with the sounddevice package, so no additional installation is required. On Linux, however, you will need to install PortAudio manually using your distribution’s package manager.
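For example, on Debian or Ubuntu the PortAudio runtime is typically available as libportaudio2:

```sh
# Debian/Ubuntu example; use the equivalent PortAudio package on other distributions.
sudo apt-get install libportaudio2
```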
Import statements and helpers
First, we import needed symbols from the Python standard library and the Hume SDK, and define some helpers that are useful for printing readable output to the terminal.
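The sketch below shows a typical import section and a small timestamped print helper. The import paths follow the SDK’s published quickstart example and may change between SDK versions, and log is just an illustrative helper name.

```python
import asyncio
import base64
import datetime
import os

from dotenv import load_dotenv
from hume import MicrophoneInterface, Stream
from hume.client import AsyncHumeClient
from hume.empathic_voice.chat.socket_client import ChatConnectOptions
from hume.empathic_voice.chat.types import SubscribeEvent


def log(text: str) -> None:
    """Print a line of terminal output prefixed with the current time."""
    now = datetime.datetime.now().strftime("%H:%M:%S")
    print(f"[{now}] {text}")
```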
Authentication
Log into your Hume AI account and obtain an API key. Store it as HUME_API_KEY inside your project’s .env file.
Read HUME_API_KEY and use it to instantiate the AsyncHumeClient class. This is the main entry point provided by the Hume Python SDK.
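A minimal sketch, assuming the imports and .env file described above:

```python
# Load the API key from .env and create the client.
load_dotenv()
HUME_API_KEY = os.getenv("HUME_API_KEY")
client = AsyncHumeClient(api_key=HUME_API_KEY)
```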
You can specify EVI’s voice and behavior for a chat by Creating a Configuration through the API or the Hume platform web interface. Set HUME_CONFIG_ID in .env or as an environment variable and read it.
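For example (using os.getenv as in the previous step):

```python
# Optional: omit the configuration ID to chat with EVI's default settings.
HUME_CONFIG_ID = os.getenv("HUME_CONFIG_ID")
```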
Connection
To connect to an EVI chat, use the client.empathic_voice.chat.connect_with_callbacks method provided by the AsyncHumeClient. When connecting to the chat, you specify the EVI config inside the ChatConnectOptions object. EVI chats are event-based, so you specify on_open, on_message, on_close, and on_error callback functions to define what your application will do in response to the events that occur during the chat.
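The sketch below shows the shape of this step. It assumes the client and HUME_CONFIG_ID from the previous section; the on_message handler and the byte_strs playback queue are defined in the following sections, and the callback signatures follow the SDK’s published quickstart example.

```python
async def main() -> None:
    options = ChatConnectOptions(config_id=HUME_CONFIG_ID)

    async def on_open() -> None:
        log("WebSocket connection opened")

    async def on_close() -> None:
        log("WebSocket connection closed")

    async def on_error(err: Exception) -> None:
        log(f"WebSocket error: {err}")

    async with client.empathic_voice.chat.connect_with_callbacks(
        options=options,
        on_open=on_open,
        on_message=on_message,  # defined in "Handling incoming messages"
        on_close=on_close,
        on_error=on_error,
    ) as socket:
        # Keep the chat alive here; microphone input is started in the
        # "Audio input" section below.
        ...
```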
Handling incoming messages
After you successfully connect to an EVI chat, messages will be passed to your on_message handler. These are described by the Hume SDK’s SubscribeEvent type.
Audio segments for playback arrive on messages of the audio_output type. The Hume SDK provides a Stream type that is suitable for queuing audio segments for playback. You should instantiate a single Stream instance to act as your playback queue.
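A sketch of the playback queue and an on_message handler, assuming the imports above. The message fields accessed here (type, chat_id, message.content, data, code) follow the SDK’s quickstart example and may vary slightly between versions.

```python
# Single playback queue shared with the microphone interface.
byte_strs = Stream.new()


async def on_message(message: SubscribeEvent) -> None:
    if message.type == "chat_metadata":
        log(f"Chat started (chat_id={message.chat_id})")
    elif message.type in ("user_message", "assistant_message"):
        log(f"{message.message.role}: {message.message.content}")
    elif message.type == "audio_output":
        # Audio segments arrive base64-encoded; decode and queue for playback.
        await byte_strs.put(base64.b64decode(message.data.encode("utf-8")))
    elif message.type == "error":
        log(f"Error ({message.code}): {message.message}")
```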
Audio input
The Hume SDK provides a MicrophoneInterface class that handles both:
- Sending recorded audio through the WebSocket to EVI
- Playing back queued audio from a byte_stream of type Stream that you initialize it with
To use MicrophoneInterface.start, pass it the chat socket provided by the connect_with_callbacks method:
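For example, replacing the placeholder inside the async with block from the Connection section (byte_strs is the Stream created above; keyword names follow the SDK’s quickstart example):

```python
        # Stream microphone audio to EVI and play back queued audio segments.
        await MicrophoneInterface.start(socket, byte_stream=byte_strs)
```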
Specify a microphone device
MicrophoneInterface.start will attempt to use the system’s default audio input device. To use a specific audio input device, pass it via the optional device parameter of MicrophoneInterface.start.
To view a list of available audio devices, run the following command:
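One way to list devices is through the sounddevice package installed during environment setup, which prints the device table that the indices below refer to:

```sh
python -m sounddevice
```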
If the MacBook Pro Microphone is the desired device, specify device 4 when starting the microphone. For example:
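A sketch, with an illustrative index that should match the listing on your machine:

```python
        # Use input device 4 ("MacBook Pro Microphone" in this example listing).
        await MicrophoneInterface.start(socket, device=4, byte_stream=byte_strs)
```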
For troubleshooting faulty device detection, particularly on systems using ALSA (the Advanced Linux Sound Architecture), the device may also be specified directly using the sounddevice library:
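A sketch using sounddevice’s module-level default, again with an illustrative index:

```python
import sounddevice

# Make device 4 the default before MicrophoneInterface.start opens the stream.
sounddevice.default.device = 4
```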
Interruption
The allow_interrupt parameter of the MicrophoneInterface class controls whether the user can send a message while the assistant is speaking:
- allow_interrupt=True: Allows the user to send microphone input even when the assistant is speaking. This enables more fluid, overlapping conversation.
- allow_interrupt=False: Prevents the user from sending microphone input while the assistant is speaking, ensuring that the user does not interrupt the assistant. This is useful in scenarios where clear, uninterrupted communication is important.
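A minimal sketch of enabling interruptions; the keyword follows this guide’s naming, so check the MicrophoneInterface.start signature in your installed SDK version (some versions name the flag allow_user_interrupt):

```python
        # Allow the user to talk over the assistant; the flag name follows
        # this guide and may be allow_user_interrupt in some SDK versions.
        await MicrophoneInterface.start(socket, allow_interrupt=True, byte_stream=byte_strs)
```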
Put it all together
Finally, add the following code at the end of your script to run the main function:
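For example, assuming your top-level coroutine is named main as in the complete example linked below:

```python
# Start the async entry point when the script is executed directly.
if __name__ == "__main__":
    asyncio.run(main())
```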
View the complete quickstart.py code on GitHub
Next steps
Congratulations! You’ve successfully implemented a real-time conversational application using Hume’s Empathic Voice Interface (EVI).
Next, consider exploring these areas to enhance your EVI application:
See detailed instructions on how you can customize EVI for your application needs.
Learn how you can access and manage conversation transcripts and expression measures.
For further details and practical examples, explore the API Reference and our Hume API Examples on GitHub.