EVI Next.js Quickstart

A quickstart guide for implementing the Empathic Voice Interface (EVI) with Next.js.

With Hume’s React SDK, WebSocket connection management is handled for you and the complexities of audio capture, playback, and streaming are abstracted away. You can integrate EVI into your React app with just a few hooks and components, without writing any low-level WebSocket or audio code.

In this guide, you’ll learn how to integrate EVI into your Next.js applications using Hume’s React SDK, with step-by-step instructions for both the App Router and the Pages Router.

This guide is broken up into five sections:

  1. Installation: Install Hume SDK packages.
  2. Authentication: Generate and use an access token to authenticate with EVI.
  3. Context provider: Set up the <VoiceProvider/>.
  4. Connection: Open a WebSocket connection and start a chat with EVI.
  5. Display chat: Display chat messages in the UI.

Before you begin, you’ll need an existing Next.js project.

Installation

Install Hume’s React SDK and TypeScript SDK packages.

pnpm i @humeai/voice-react hume
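
If you use npm or yarn instead of pnpm, the equivalent commands are:

npm install @humeai/voice-react hume
yarn add @humeai/voice-react hume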

Authentication

Generate an access token for authentication. Doing so requires your API key and secret key, which you can obtain by logging into the portal and visiting the API keys page.

Load your API key and secret from environment variables. Avoid hardcoding them in your code to prevent credential leaks and unauthorized access.
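
For example, you might keep them in a .env.local file at the project root. This is a sketch; the variable names match the code in this guide, and the values shown are placeholders:

./.env.local
HUME_API_KEY=your_api_key_here
HUME_SECRET_KEY=your_secret_key_here

Next.js loads .env.local automatically on the server, so process.env.HUME_API_KEY and process.env.HUME_SECRET_KEY are available in the server-side code below.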

In your root component, use the TypeScript SDK’s fetchAccessToken method to fetch your access token.

./app/page.tsx
import dynamic from "next/dynamic";
import { fetchAccessToken } from "hume";

const Chat = dynamic(() => import("@/components/Chat"), {
  ssr: false,
});

export default async function Page() {
  const accessToken = await fetchAccessToken({
    apiKey: String(process.env.HUME_API_KEY),
    secretKey: String(process.env.HUME_SECRET_KEY),
  });

  return (
    <div className={"grow flex flex-col"}>
      <Chat accessToken={accessToken} />
    </div>
  );
}
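
If you're using the Pages Router, you can fetch the access token in getServerSideProps instead. Here's a minimal sketch under the same environment-variable setup (the file path and prop shape are illustrative):

./pages/index.tsx
import type { GetServerSideProps } from "next";
import dynamic from "next/dynamic";
import { fetchAccessToken } from "hume";

const Chat = dynamic(() => import("@/components/Chat"), {
  ssr: false,
});

// Fetch a fresh access token on each request, server-side, so the
// API key and secret key never reach the browser.
export const getServerSideProps: GetServerSideProps = async () => {
  const accessToken = await fetchAccessToken({
    apiKey: String(process.env.HUME_API_KEY),
    secretKey: String(process.env.HUME_SECRET_KEY),
  });

  return { props: { accessToken } };
};

export default function Page({ accessToken }: { accessToken: string }) {
  return (
    <div className={"grow flex flex-col"}>
      <Chat accessToken={accessToken} />
    </div>
  );
}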

Context provider

After fetching our access token, we can pass it to our Chat component. First, we set up the <VoiceProvider/> so that our Messages and StartCall components can access the context.

We also pass the access token to the accessToken prop of the StartCall component for setting up the WebSocket connection.

./components/Chat.tsx
"use client";
import { VoiceProvider } from "@humeai/voice-react";
import Messages from "./Messages";
import StartCall from "./StartCall";

export default function Chat({
  accessToken,
}: {
  accessToken: string;
}) {
  return (
    <VoiceProvider>
      <Messages />
      <StartCall accessToken={accessToken} />
    </VoiceProvider>
  );
}

Connection

Use the useVoice hook’s connect method to start a Chat session. It is important to call connect in response to a user interaction (like a click) so that the browser allows audio recording and playback.

Implementing this step is the same whether you are using the App Router or Pages Router.

./components/StartCall.tsx
1"use client";
2import {
3 useVoice,
4 ConnectOptions,
5 VoiceReadyState
6} from "@humeai/voice-react";
7
8export default function StartCall({
9 accessToken,
10}: {
11 accessToken: string;
12}) {
13 const { connect, disconnect, readyState } = useVoice();
14
15 if (readyState === VoiceReadyState.OPEN) {
16 return (
17 <button
18 onClick={() => {
19 disconnect();
20 }}
21 >
22 End Session
23 </button>
24 );
25 }
26
27 return (
28 <button
29 onClick={() => {
30 connect({
31 auth: { type: "accessToken", value: accessToken }
32 })
33 .then(() => {
34 /* handle success */
35 })
36 .catch(() => {
37 /* handle error */
38 });
39 }}
40 >
41 Start Session
42 </button>
43 );
44}

Display chat

Use the useVoice hook to access the messages array, then map over it to display the role (assistant or user) and content of each message. Only user_message and assistant_message entries carry a transcript, so other message types are skipped.

Implementing this step is the same whether you are using the App Router or Pages Router.

./components/Messages.tsx
import { useVoice } from "@humeai/voice-react";

export default function Messages() {
  const { messages } = useVoice();

  return (
    <div>
      {messages.map((msg, index) => {
        // Only user and assistant messages contain a transcript to display.
        if (msg.type !== "user_message" && msg.type !== "assistant_message") {
          return null;
        }

        return (
          <div key={msg.type + index}>
            <div>{msg.message.role}</div>
            <div>{msg.message.content}</div>
          </div>
        );
      })}
    </div>
  );
}

Next steps

Congratulations! You’ve successfully integrated EVI using Hume’s React SDK.

For further details and practical examples to enhance your EVI application, explore the API Reference and our Hume API Examples on GitHub.