EVI .NET Quickstart
A quickstart guide for integrating the Empathic Voice Interface (EVI) with .NET.
In this guide, you’ll learn how to use Hume’s .NET SDK to integrate with EVI.
Make sure that connecting to EVI from your .NET code is the right choice.
If your .NET app is a client app — a desktop application or CLI that runs on the user’s machine and captures audio directly from their microphone — then connecting to EVI from .NET is appropriate.
If your .NET app is a server app that will not run on the same machine the user’s microphone is connected to, it is usually better to connect to EVI directly from the client, not from your .NET code, to keep latency low. If you need to control an EVI chat with logic that must live on your backend, open the chat from the client, then have your .NET backend use the Send Message endpoint or a Control Plane WebSocket connection to control it.
The example code in this guide sends EVI hardcoded audio from a file as a placeholder; you should replace this with logic that sends audio captured from your user’s microphone. The guide covers the following steps:
- Environment setup: Download package and system dependencies to run EVI.
- Import statements: Import needed symbols from the Hume SDK.
- Authentication: Use your API credentials to authenticate your EVI application.
- Connection: Set up a secure WebSocket connection to interact with EVI.
- Handling incoming messages: Subscribe to events and process messages from EVI.
- Audio input: Capture audio data from an input device and send to EVI.
See the complete implementation of this guide on GitHub
Explore or contribute to Hume’s .NET SDK on GitHub
Environment setup
Create a new .NET project and install the required packages, using either the dotnet CLI or the NuGet Package Manager in Visual Studio.
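A minimal setup with the dotnet CLI might look like the following. The NuGet package names are assumptions (the SDK package is assumed to be named Hume, and DotNetEnv is one common choice for loading the .env file created below), so confirm them on NuGet:

```sh
# Create and enter a new console project
dotnet new console -n evi-quickstart
cd evi-quickstart

# Hume .NET SDK (assumed package name; verify on NuGet)
dotnet add package Hume

# Loads the .env file used in the Authentication step below
dotnet add package DotNetEnv
```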
Download sample audio
Download the sample PCM audio file to use with this guide.
Import statements
First, import the needed namespaces from the .NET standard library and the Hume SDK.
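A sketch of the usings, assuming the SDK’s types live under the Hume and Hume.EmpathicVoice namespaces; verify the exact names against the SDK source:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

// Assumed Hume SDK namespaces; check the SDK repository for exact names.
using Hume;
using Hume.EmpathicVoice;
```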
Authentication
Log into your Hume AI account and obtain an API key. Create a .env file in your project directory and store your API key there.
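The .env file holds one KEY=value pair per line; HUME_API_KEY is the variable name assumed throughout this guide:

```
HUME_API_KEY=<YOUR_API_KEY>
```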
Load the environment variables and use the API key to instantiate the HumeClient class. This is the main entry point provided by the Hume .NET SDK.
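For example, with DotNetEnv (the HumeClient constructor shown here takes the API key directly, which is an assumption to verify against the SDK reference):

```csharp
// Load variables from .env into the process environment.
DotNetEnv.Env.Load();

var apiKey = Environment.GetEnvironmentVariable("HUME_API_KEY")
    ?? throw new InvalidOperationException("HUME_API_KEY is not set.");

// Main entry point provided by the Hume .NET SDK.
var client = new HumeClient(apiKey);
```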
Connection
To connect to an EVI chat, create a ChatApi instance using the client.EmpathicVoice.CreateChatApi method. You can specify session settings in the ChatApi.Options object.
Connect to EVI and wait for the chat metadata message to confirm the connection is established.
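A sketch of the connection step. CreateChatApi and ChatApi.Options come from the description above; the ConnectAsync call and the OnChatMetadata handler are assumptions to check against the SDK:

```csharp
// Session settings can be supplied through the options object.
var chat = client.EmpathicVoice.CreateChatApi(new ChatApi.Options());

// Complete once chat metadata arrives, confirming the session is live.
var connected = new TaskCompletionSource();
chat.OnChatMetadata += _ => connected.TrySetResult();

await chat.ConnectAsync(); // assumed method name
await connected.Task;
Console.WriteLine("Connected to EVI.");
```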
Handling incoming messages
EVI communicates through events. Subscribe to the events you want to handle before connecting. The main event types are:
- AssistantMessage: Text messages from EVI
- UserMessage: Transcriptions of user speech
- AudioOutput: Audio data for playback
- ChatMetadata: Information about the chat session
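A sketch of subscribing to these events; the handler names mirror the message types above, but the exact members and payload shapes are assumptions to verify in the SDK reference:

```csharp
chat.OnAssistantMessage += message =>
{
    // Text EVI is about to speak.
    Console.WriteLine($"Assistant: {message}");
};

chat.OnUserMessage += message =>
{
    // Transcription of the user's speech.
    Console.WriteLine($"User: {message}");
};

chat.OnAudioOutput += audio =>
{
    // Base64-encoded audio to decode and play on your output device.
};

chat.OnChatMetadata += metadata =>
{
    // Chat and session identifiers; the first message after connecting.
};
```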
Audio input
Before sending audio, configure the audio format by sending session settings. EVI expects audio in a specific format (e.g., 48kHz, 16-bit, mono PCM).
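For example (the SessionSettings shape and send method below are assumptions patterned on EVI’s session settings message; confirm the names in the SDK):

```csharp
// Declare the raw PCM format this guide streams: 48 kHz, 16-bit, mono.
await chat.SendSessionSettingsAsync(new SessionSettings
{
    Audio = new AudioConfiguration
    {
        Encoding = "linear16", // 16-bit linear PCM
        SampleRate = 48000,
        Channels = 1,
    }
});
```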
Sending audio data
Audio data should be sent as base64-encoded chunks. A helper function can read a PCM file and stream it to EVI in paced, real-time chunks.
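A sketch of such a helper, assuming a SendAudioInputAsync method on the chat object (the real SDK call may differ):

```csharp
static async Task StreamPcmFileAsync(
    ChatApi chat, string path,
    int sampleRate = 48000, int channels = 1, int bytesPerSample = 2)
{
    // Send ~100 ms of audio per chunk so the stream keeps pace with
    // real time rather than arriving all at once.
    int chunkBytes = sampleRate * channels * bytesPerSample / 10;
    var buffer = new byte[chunkBytes];

    await using var file = File.OpenRead(path);
    int read;
    while ((read = await file.ReadAsync(buffer)) > 0)
    {
        // EVI expects audio input as base64-encoded chunks.
        string base64 = Convert.ToBase64String(buffer, 0, read);
        await chat.SendAudioInputAsync(base64); // assumed SDK method
        await Task.Delay(100); // pace sends to real time
    }
}
```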
Put it all together
Here’s the complete example that connects to EVI and transmits audio.
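A condensed sketch of how the pieces fit together, reusing the StreamPcmFileAsync helper above. The assumed member names flagged in earlier sections apply here as well, and "sample.pcm" is a placeholder path for the downloaded sample file; the completed project on GitHub remains the authoritative version:

```csharp
using System;
using System.Threading.Tasks;
using Hume;                 // assumed namespace
using Hume.EmpathicVoice;   // assumed namespace

class Program
{
    static async Task Main()
    {
        DotNetEnv.Env.Load();
        var client = new HumeClient(
            Environment.GetEnvironmentVariable("HUME_API_KEY")!);

        var chat = client.EmpathicVoice.CreateChatApi(new ChatApi.Options());

        var connected = new TaskCompletionSource();
        chat.OnChatMetadata += _ => connected.TrySetResult();
        chat.OnAssistantMessage += m => Console.WriteLine($"Assistant: {m}");

        await chat.ConnectAsync(); // assumed method name
        await connected.Task;

        // Declare the PCM format, then stream the sample file.
        await chat.SendSessionSettingsAsync(new SessionSettings
        {
            Audio = new AudioConfiguration
            {
                Encoding = "linear16", SampleRate = 48000, Channels = 1,
            }
        });
        await StreamPcmFileAsync(chat, "sample.pcm"); // placeholder path

        // Give EVI time to respond before the process exits.
        await Task.Delay(TimeSpan.FromSeconds(10));
    }
}
```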
Running the example
Run the example from the project directory using the dotnet CLI, or start it from Visual Studio.
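From the project directory:

```sh
dotnet run
```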
View the complete example code on GitHub.
Next steps
Next, consider exploring these areas to enhance your EVI application:
See detailed instructions on how you can customize EVI for your application needs.
Learn how you can access and manage conversation transcripts and expression measures.
For further details and practical examples, explore the API Reference and our Hume API Examples on GitHub.