EVI TypeScript Quickstart Guide
A quickstart guide for implementing the Empathic Voice Interface (EVI) with TypeScript.
This guide provides instructions for integrating EVI into your TypeScript projects. It includes detailed steps for using EVI with Next.js (App Router), Next.js (Pages Router), and a standalone setup without any framework.
Kickstart your project with our pre-configured Vercel template for the Empathic Voice Interface. Install with one click to instantly set up a ready-to-use project and start building with TypeScript right away!
This tutorial utilizes Hume’s React SDK to interact with EVI. It provides detailed steps for the App Router in Next.js and is broken down into four key components:
- Authentication: Generate and use an access token to authenticate with EVI.
- Setting up the context provider: Set up the `<VoiceProvider/>`.
- Starting a chat and displaying messages: Implement the functionality to start a chat with EVI and display messages.
- That’s it!: Audio playback and interruptions are handled for you.
The Hume React SDK abstracts much of the logic for managing the WebSocket connection, as well as capturing and preparing audio for processing. For a closer look at how the React package manages these aspects of the integration, we invite you to explore the source code here: @humeai/voice-react
To see this code fully implemented within a frontend web application using the App Router from Next.js, visit this GitHub repository: evi-nextjs-app-router.
Prerequisites
Before you begin, you will need to have an existing Next.js project set up using the App Router.
Authenticate
To make an authenticated connection, we first need to generate an access token. Doing so requires your API key and Secret key. These keys can be obtained by logging into the portal and visiting the API keys page.
In the sample code below, the API key and Secret key have been saved to environment variables. Avoid hard-coding these values in your project to prevent them from being leaked.
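A minimal sketch of this step, assuming the `fetchAccessToken` helper exported by the `hume` SDK; the environment variable names and file paths here are illustrative:

```tsx
// app/page.tsx (server component, App Router)
import { fetchAccessToken } from "hume";
import ClientComponent from "./components/ClientComponent";

export default async function Page() {
  // Read credentials from environment variables; never hard-code them.
  const accessToken = await fetchAccessToken({
    apiKey: String(process.env.HUME_API_KEY),
    secretKey: String(process.env.HUME_SECRET_KEY),
  });

  if (!accessToken) {
    throw new Error("Failed to fetch access token.");
  }

  // Hand the token to the client component that renders <VoiceProvider/>.
  return <ClientComponent accessToken={accessToken} />;
}
```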
Set Up the Context Provider
After fetching our access token, we can pass it to our `ClientComponent`. First we set up the `<VoiceProvider/>` so that our `Messages` and `Controls` components can access the voice context. We also pass the access token to the `auth` prop of the `<VoiceProvider/>` to set up the WebSocket connection.
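The wiring might look like the following sketch, assuming the `auth` prop accepts an object of the form `{ type: "accessToken", value }` (the exact shape may vary by SDK version); the file paths are illustrative:

```tsx
// app/components/ClientComponent.tsx (client component)
"use client";

import { VoiceProvider } from "@humeai/voice-react";
import Messages from "./Messages";
import Controls from "./Controls";

export default function ClientComponent({
  accessToken,
}: {
  accessToken: string;
}) {
  return (
    // The provider uses the access token to authenticate the WebSocket
    // connection and exposes the voice context to Messages and Controls.
    <VoiceProvider auth={{ type: "accessToken", value: accessToken }}>
      <Messages />
      <Controls />
    </VoiceProvider>
  );
}
```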
Starting a Session
To start a session, use the `connect` function. It is important to call `connect` from a user interaction handler (such as a click) so that the browser permits audio playback.
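For example, a `Controls` component along these lines, assuming the `useVoice` hook and `VoiceReadyState` enum exported by `@humeai/voice-react`, and that `connect()` returns a promise:

```tsx
// app/components/Controls.tsx (client component)
"use client";

import { useVoice, VoiceReadyState } from "@humeai/voice-react";

export default function Controls() {
  const { connect, disconnect, readyState } = useVoice();

  // Once the WebSocket is open, offer a way to end the session instead.
  if (readyState === VoiceReadyState.OPEN) {
    return <button onClick={() => disconnect()}>End Session</button>;
  }

  return (
    <button
      onClick={() => {
        // Calling connect() inside a click handler satisfies browser
        // autoplay policies, so EVI's audio responses can be played.
        connect().catch((error) => {
          console.error("Failed to start EVI session:", error);
        });
      }}
    >
      Start Session
    </button>
  );
}
```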