Guide to accessing chat history with EVI.

EVI captures detailed histories of conversations, allowing developers to review and analyze past interactions. This guide provides an overview of Chats and Chat Groups, instructions for retrieving chat transcripts and expression measurements, and steps to access reconstructed audio.

If data retention is disabled, Chat history will not be recorded, and previous Chat data and audio reconstruction will not be retrievable.

Chats vs Chat Groups

EVI organizes conversation history into Chats and Chat Groups.

  • Chats: A Chat represents a single session, from the moment a WebSocket connection is established until it disconnects. Each Chat contains the messages and events recorded during that specific session.
  • Chat Groups: A Chat Group links related Chats to provide continuity across interactions. A group can contain one or more Chats, allowing an ongoing conversation to be tracked even when users reconnect to continue from a previous interaction.

When a new Chat session begins, it creates a new Chat Group by default. However, if the Chat resumes a previous session, it is added to the existing Chat Group, ensuring the conversation’s history and context are preserved across multiple Chats.
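
For example, to resume a previous conversation so that the new Chat joins the existing Chat Group, you can supply the group's ID when opening the connection. Below is a minimal sketch using the Hume TypeScript SDK; the resumedChatGroupId connect option is assumed here to mirror the resumed_chat_group_id query parameter, so verify the exact option name against your SDK version:

import { HumeClient } from "hume";

// Minimal sketch: resume a prior conversation by supplying its Chat Group ID.
// resumedChatGroupId is assumed to mirror the resumed_chat_group_id query
// parameter; confirm against your SDK version.
const client = new HumeClient({ apiKey: process.env.HUME_API_KEY! });

const socket = await client.empathicVoice.chat.connect({
  resumedChatGroupId: "<PREVIOUS_CHAT_GROUP_ID>",
});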

Fetching Chats and Chat Groups

Each Chat has a unique ID and a chat_group_id field that links it to its associated Chat Group. Similarly, each Chat Group has its own unique ID, enabling the retrieval of individual sessions or entire groups of related sessions.

  • Chat ID: To obtain a Chat ID, use the list Chats endpoint. This ID allows you to retrieve details of individual sessions or resume a previous Chat. See sample code for fetching Chats below:

    curl -G https://api.hume.ai/v0/evi/chats \
      -H "X-Hume-Api-Key: <YOUR_API_KEY>" \
      -d page_number=0 \
      -d page_size=10 \
      -d ascending_order=false
  • Chat Group ID: Each Chat includes a chat_group_id field, which identifies the Chat Group it belongs to. To obtain a Chat Group ID directly, use the list Chat Groups endpoint. This ID is useful for accessing all Chats linked to a single conversation that spans multiple sessions. See sample code for fetching Chat Groups below:

    curl -G https://api.hume.ai/v0/evi/chat_groups \
      -H "X-Hume-Api-Key: <YOUR_API_KEY>" \
      -d page_number=0 \
      -d page_size=1 \
      -d ascending_order=false

While you can retrieve these IDs using the API, the Chat and Chat Group IDs are also included at the start of every Chat session in a chat_metadata message. This is particularly useful if your integration needs to associate data or actions with Chats as they are initiated.

chat_metadata
{
  "type": "chat_metadata",
  "chat_group_id": "369846cf-6ad5-404d-905e-a8acb5cdfc78",
  "chat_id": "470a49f6-1dec-4afe-8b61-035d3b2d63b0",
  "request_id": "73c75efd-afa2-4e24-a862-91096b0961362258039"
}
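
One way to make use of this is to capture the IDs from the first message on the socket. Below is a minimal sketch, assuming the SDK's chat socket exposes an on("message") hook and the camelCase fields from its type definitions:

import { HumeClient } from "hume";
import { SubscribeEvent } from "hume/api/resources/empathicVoice";

// Minimal sketch: record the session's IDs as soon as chat_metadata arrives.
// Assumes the SDK chat socket exposes an on("message") hook; verify against
// your SDK version.
const client = new HumeClient({ apiKey: process.env.HUME_API_KEY! });
const socket = await client.empathicVoice.chat.connect({});

let chatId: string | undefined;
let chatGroupId: string | undefined;

socket.on("message", (message: SubscribeEvent) => {
  if (message.type === "chat_metadata") {
    chatId = message.chatId;
    chatGroupId = message.chatGroupId;
    console.log(`Chat ID: ${chatId} | Chat Group ID: ${chatGroupId}`);
  }
});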

Viewing Chats in the Platform UI

You can also view chat history and obtain Chat IDs through the Platform UI:

  1. Go to the Chat history page for a paginated list of past Chats, each displaying key details like the Chat ID, datetime, event count, and duration.

    [Image: Platform UI chat history page]
  2. Click “Open” on any Chat to view its details. The details page includes information such as status, start and end timestamps, duration, the Chat ID, Chat Group ID, associated Config ID (if any), and a paginated list of Chat Events.

    [Image: Platform UI chat details page]

Chat Events

During each Chat session, EVI records events that detail the interactions between the user and the system. These events provide a complete record of user input, assistant responses, tool usage, and system commands, enabling developers to review transcripts, analyze activity, and extract expression measurements. The WebSocket messages exchanged during a session, including user_message and assistant_message events, are recorded as Chat Events.

Chat Events cannot be modified; they form an immutable record of the conversation for transcription and analysis purposes.

Fetching Chat Events

The Chat Events API provides endpoints to fetch events for a specific Chat or a Chat Group, allowing developers to retrieve detailed session data. Below are examples of how to use these endpoints:

Fetching events for a specific Chat

Use the /chats/{chat_id}/events endpoint to fetch events for a single Chat:

curl -G https://api.hume.ai/v0/evi/chats/<YOUR_CHAT_ID>/events \
  -H "X-Hume-Api-Key: <YOUR_API_KEY>" \
  -d page_number=0 \
  -d page_size=10 \
  -d ascending_order=false

Fetching events for a specific Chat Group

Use the /chat_groups/{chat_group_id}/events endpoint to fetch events from all Chats within a specific Chat Group:

curl -G https://api.hume.ai/v0/evi/chat_groups/<YOUR_CHAT_GROUP_ID>/events \
  -H "X-Hume-Api-Key: <YOUR_API_KEY>" \
  -d page_number=0 \
  -d page_size=10 \
  -d ascending_order=false
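
Both endpoints are paginated, so fetching a long session means walking the pages. With the TypeScript SDK, one approach is to iterate the pager returned by listChatEvents; the sketch below assumes that pager is async-iterable, as in our example projects:

import { HumeClient } from "hume";
import { ReturnChatEvent } from "hume/api/resources/empathicVoice";

// Sketch: collect every event for a Chat by walking the paginated response.
// Assumes the listChatEvents pager supports async iteration.
async function fetchAllChatEvents(chatId: string): Promise<ReturnChatEvent[]> {
  const client = new HumeClient({ apiKey: process.env.HUME_API_KEY! });
  const allChatEvents: ReturnChatEvent[] = [];

  const chatEventsPage = await client.empathicVoice.chats.listChatEvents(chatId, {
    pageNumber: 0,
    pageSize: 100,
  });
  for await (const chatEvent of chatEventsPage) {
    allChatEvents.push(chatEvent);
  }
  return allChatEvents;
}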

Parsing Chat Events

Chat Events provide a detailed record of interactions during a Chat session, capturing both transcriptions and expression measurement predictions. This section demonstrates how to process these events to generate readable transcripts and analyze emotional trends.

For sample code demonstrating how to fetch and parse Chat Events, explore our example projects in TypeScript and Python.

Chat transcription

Transcriptions of a conversation are stored in user_message and assistant_message events. These events include the speaker’s role and the corresponding text, allowing you to reconstruct the dialogue into a readable format.

For instance, you may need to create a transcript of a conversation for documentation or analysis. Transcripts can help review user intent, evaluate system responses, or provide written records for compliance or training purposes.

The following example demonstrates how to extract the Chat transcription from a list of Chat Events and save it as a text file named transcript_<CHAT_ID>.txt:

import fs from "fs";
import { ReturnChatEvent } from "hume/api/resources/empathicVoice";

function generateTranscript(chatEvents: ReturnChatEvent[], chatId: string): void {
  // Filter events for user and assistant messages
  const relevantChatEvents = chatEvents.filter(
    (chatEvent) => chatEvent.type === "USER_MESSAGE" || chatEvent.type === "AGENT_MESSAGE"
  );

  // Map each relevant event to a formatted line
  const transcriptLines = relevantChatEvents.map((chatEvent) => {
    const role = chatEvent.role === "USER" ? "User" : "Assistant";
    const timestamp = new Date(chatEvent.timestamp).toLocaleString(); // Human-readable date/time
    return `[${timestamp}] ${role}: ${chatEvent.messageText}`;
  });

  // Join all lines into a single transcript string
  const transcript = transcriptLines.join("\n");
  // Define the transcript file name using the Chat ID
  const transcriptFileName = `transcript_${chatId}.txt`;
  // Write the transcript to a text file
  try {
    fs.writeFileSync(transcriptFileName, transcript, "utf8");
    console.log(`Transcript saved to ${transcriptFileName}`);
  } catch (fileError) {
    console.error(`Error writing to file ${transcriptFileName}:`, fileError);
  }
}
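
Putting the pieces together, you might fetch the events first (for example with the fetchAllChatEvents sketch shown earlier) and then write the transcript:

// Hypothetical usage, reusing the fetchAllChatEvents sketch from above:
const chatId = "<YOUR_CHAT_ID>";
const chatEvents = await fetchAllChatEvents(chatId);
generateTranscript(chatEvents, chatId);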

Expression measurement

Expression measurement predictions are stored in user_message events under the models.prosody.scores property; on Chat Events returned by the API, the same scores are surfaced in the emotionFeatures field. These predictions provide confidence levels for various emotions detected in the user’s speech.

For example, you might want to gauge the emotional tone of a conversation to better understand user sentiment. This information can guide customer support strategies or highlight trends in the expression measurement predictions over time.

The following example calculates the top 3 emotions by averaging the emotion scores across all user_message events in the Chat session:

import { ReturnChatEvent, EmotionScores } from "hume/api/resources/empathicVoice";

function getTopEmotions(chatEvents: ReturnChatEvent[]): Partial<EmotionScores> {
  // Extract user messages that have emotion features
  const userMessages = chatEvents.filter(
    (event) => event.type === "USER_MESSAGE" && event.emotionFeatures
  );
  // Guard against sessions with no scored user messages
  if (userMessages.length === 0) return {};

  const totalMessages = userMessages.length;

  // Infer emotion keys from the first user message
  const firstMessageEmotions = JSON.parse(userMessages[0].emotionFeatures!) as EmotionScores;
  const emotionKeys = Object.keys(firstMessageEmotions) as (keyof EmotionScores)[];

  // Initialize sums for all emotions to 0
  const emotionSums = Object.fromEntries(
    emotionKeys.map((key) => [key, 0])
  ) as Record<keyof EmotionScores, number>;

  // Accumulate emotion scores from each user message
  for (const event of userMessages) {
    const emotions = JSON.parse(event.emotionFeatures!) as EmotionScores;
    for (const key of emotionKeys) {
      emotionSums[key] += emotions[key];
    }
  }

  // Compute average scores for each emotion
  const averageEmotions = emotionKeys.map((key) => ({
    emotion: key,
    score: emotionSums[key] / totalMessages,
  }));

  // Sort by average score (descending) and pick the top 3
  averageEmotions.sort((a, b) => b.score - a.score);
  const top3 = averageEmotions.slice(0, 3);

  // Build a Partial<EmotionScores> with only the top 3 emotions
  const result: Partial<EmotionScores> = {};
  for (const { emotion, score } of top3) {
    result[emotion] = score;
  }

  return result;
}
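
Called on the same list of events, the function returns a map from each of the top three emotions to its average score:

// Hypothetical usage; the emotion names and scores shown are illustrative only.
const topEmotions = getTopEmotions(chatEvents);
console.log(topEmotions); // e.g. { calmness: 0.41, interest: 0.33, amusement: 0.28 }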
