EVI TypeScript Quickstart Guide

A quickstart guide for implementing the Empathic Voice Interface (EVI) with TypeScript.

This guide provides instructions for integrating EVI into your TypeScript projects. It includes detailed steps for using EVI with Next.js (App Router), Next.js (Pages Router), and a standalone setup without any framework.

Kickstart your project with our pre-configured Vercel template for the Empathic Voice Interface. Install with one click to instantly set up a ready-to-use project and start building with TypeScript right away!

This tutorial uses Hume’s React SDK to interact with EVI. It walks through the App Router integration in Next.js and is broken down into four key components:

  1. Authentication: Generate and use an access token to authenticate with EVI.
  2. Setting up the context provider: Set up the <VoiceProvider/>.
  3. Starting a chat and displaying messages: Implement the functionality to start a chat with EVI and display messages.
  4. That’s it!: Audio playback and interruptions are handled for you.

The Hume React SDK abstracts much of the logic for managing the WebSocket connection, as well as capturing and preparing audio for processing. For a closer look at how the React package manages these aspects of the integration, we invite you to explore the source code here: @humeai/voice-react

To see this code fully implemented within a frontend web application using the App Router from Next.js, visit this GitHub repository: evi-nextjs-app-router.

1. Prerequisites

Before you begin, you will need to have an existing Next.js project set up using the App Router.
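
If you are starting from scratch, a typical setup might look like the sketch below. The package names match the imports used later in this guide; adjust the project name and package manager to your preferences.

Shell
# Scaffold a new Next.js project (select the App Router when prompted)
npx create-next-app@latest my-evi-app
cd my-evi-app

# Install the Hume SDKs used in this guide
npm install hume @humeai/voice-react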

2. Authenticate

To make an authenticated connection, we first need to generate an access token. Doing so requires your API key and Secret key, which can be obtained by logging into the portal and visiting the API keys page.

In the sample code below, the API key and Secret key have been saved to environment variables. Avoid hard-coding these values in your project to prevent them from being leaked.
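
For example, in a Next.js project the keys can be placed in a local environment file. The variable names below match those referenced in the sample code; keep this file out of version control.

# ./.env.local
HUME_API_KEY=<your API key>
HUME_SECRET_KEY=<your Secret key>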

React
// ./app/page.tsx
import ClientComponent from "@/components/ClientComponent";
import { fetchAccessToken } from "hume";

export default async function Page() {
  // Generate an access token server side using your API key and Secret key
  const accessToken = await fetchAccessToken({
    apiKey: String(process.env.HUME_API_KEY),
    secretKey: String(process.env.HUME_SECRET_KEY),
  });

  if (!accessToken) {
    throw new Error("Unable to fetch access token");
  }

  // Pass the token down to the client component that renders the voice UI
  return <ClientComponent accessToken={accessToken} />;
}

3. Setup Context Provider

After fetching our access token, we can pass it to our ClientComponent. First, we set up the <VoiceProvider/> so that our Messages and Controls components can access its context. We also pass the access token to the auth prop of the <VoiceProvider/>, which is used to establish the WebSocket connection.

TypeScript
// ./components/ClientComponent.tsx
"use client";
import { VoiceProvider } from "@humeai/voice-react";
import Messages from "./Messages";
import Controls from "./Controls";

export default function ClientComponent({
  accessToken,
}: {
  accessToken: string;
}) {
  return (
    <VoiceProvider auth={{ type: "accessToken", value: accessToken }}>
      <Messages />
      <Controls />
    </VoiceProvider>
  );
}

4. Audio input

The <VoiceProvider/> handles the microphone capture and audio playback logic for you, so no additional audio code is required.
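
If you want to expose microphone controls in your own UI, a minimal sketch is shown below. It assumes the useVoice hook exposes isMuted, mute, and unmute; check the @humeai/voice-react source linked above to confirm the exact names in your SDK version.

TypeScript
// ./components/MuteButton.tsx (hypothetical helper component)
"use client";
import { useVoice } from "@humeai/voice-react";

export default function MuteButton() {
  // Assumed API: isMuted, mute, and unmute control the microphone input
  const { isMuted, mute, unmute } = useVoice();

  return (
    <button onClick={() => (isMuted ? unmute() : mute())}>
      {isMuted ? "Unmute" : "Mute"}
    </button>
  );
}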

5. Starting a session

To start a session, use the connect function. It is important to call connect from a user interaction handler (such as a click) so that the browser permits audio playback.

TypeScript
// ./components/Controls.tsx
"use client";
import { useVoice, VoiceReadyState } from "@humeai/voice-react";

export default function Controls() {
  const { connect, disconnect, readyState } = useVoice();

  // When the connection is open, render a button that ends the session
  if (readyState === VoiceReadyState.OPEN) {
    return (
      <button
        onClick={() => {
          disconnect();
        }}
      >
        End Session
      </button>
    );
  }

  // Otherwise, render a button that starts a session on click
  return (
    <button
      onClick={() => {
        connect()
          .then(() => {
            /* handle success */
          })
          .catch(() => {
            /* handle error */
          });
      }}
    >
      Start Session
    </button>
  );
}

6. Displaying message history

To display the message history, we can use the useVoice hook to access the messages array. We can then map over the messages array to display the role (Assistant or User) and content of each message.

TypeScript
// ./components/Messages.tsx
"use client";
import { useVoice } from "@humeai/voice-react";

export default function Messages() {
  const { messages } = useVoice();

  return (
    <div>
      {messages.map((msg, index) => {
        // Only user and assistant messages carry displayable content
        if (msg.type === "user_message" || msg.type === "assistant_message") {
          return (
            <div key={msg.type + index}>
              <div>{msg.message.role}</div>
              <div>{msg.message.content}</div>
            </div>
          );
        }

        return null;
      })}
    </div>
  );
}

7. Interrupt

This Next.js example will handle interruption events automatically!