Choose which supplemental language model to use for EVI’s response generation.

EVI supports specifying a supplemental language model to generate response text during chat sessions. This configuration option lets you tailor conversational output by selecting one of the supported language models.

See our API reference for how to specify a language model in your EVI configuration.
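
As an illustration, the sketch below creates an EVI configuration with a language_model block over the REST API. It is a minimal sketch rather than the authoritative schema: the endpoint path, header name, and field names (language_model, model_provider, model_resource) are assumptions to confirm against the API reference, and the provider and model identifiers are placeholders.

```python
# Minimal sketch: create an EVI configuration that names a supplemental LLM.
# The endpoint, header, and field names are assumptions; confirm the exact
# schema and the valid provider/model identifiers in the API reference.
import os
import requests

config = {
    "name": "my-evi-config",  # hypothetical configuration name
    "language_model": {
        "model_provider": "HUME_AI",    # placeholder provider identifier
        "model_resource": "hume-evi-2"  # placeholder model identifier
    },
}

response = requests.post(
    "https://api.hume.ai/v0/evi/configs",  # assumed configs endpoint
    headers={"X-Hume-Api-Key": os.environ["HUME_API_KEY"]},
    json=config,
)
response.raise_for_status()
print(response.json())  # the created configuration, including its id
```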

Supported language models

Hume’s eLLM

Our proprietary empathic large language model, hume-evi-2, is a multimodal system that processes both language and expression measures. It generates natural language responses and guides text-to-speech prosody, delivering emotionally nuanced output. Its independent design allows it to produce an initial response faster than many external LLMs, while EVI 2’s integrated voice-language architecture keeps interactions coherent and contextually aware, with control over personality and speaking style.

External LLMs

Developers may also choose from leading external language models, such as Anthropic’s Claude 3.5 Sonnet. For a complete list of the external LLMs Hume natively supports, see our API Reference.
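
For example, switching to an external model such as Claude 3.5 Sonnet would only change the language_model block of a configuration like the one sketched above. The provider and model strings below are assumptions; use the exact identifiers listed in the API Reference.

```python
# Assumed identifiers for Anthropic's Claude 3.5 Sonnet; verify the exact
# strings against the list of natively supported LLMs in the API Reference.
language_model = {
    "model_provider": "ANTHROPIC",
    "model_resource": "claude-3-5-sonnet-20240620",
}
```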

Custom language model

For applications with specific requirements, the API supports integrating custom language models, giving you the flexibility to tailor conversational behavior to your domain. For more details on using a custom language model, see our custom language model guide.
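
As a rough illustration, and assuming a custom model is selected by pointing the language_model block at your own endpoint, the configuration might look like the sketch below. The provider identifier and URL are placeholders; the custom language model guide documents the actual fields and the request/response contract your endpoint must implement.

```python
# Hypothetical sketch: route EVI's response generation to a self-hosted model.
# Both values are placeholders; see the custom language model guide for the
# real configuration fields and the interface your server must expose.
language_model = {
    "model_provider": "CUSTOM_LANGUAGE_MODEL",
    "model_resource": "https://example.com/my-custom-llm",
}
```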

The cost of external supplemental LLMs is not added to EVI’s pricing.

Hume currently covers the cost of supplemental LLMs while we make optimizations that will make language generation significantly cheaper for our customers. LLM expenses are therefore not included in EVI’s pricing, ensuring a single, consistent price per minute regardless of which supplemental LLM you choose. Developers can select any supported LLM without additional charges, making it easy to switch between models as your needs change.

