Voice settings control how your agent sounds when speaking to callers. You configure voice from Channels > Voice in the sidebar. For voice interactions, the agent converts its text responses into speech using text-to-speech (TTS) providers like ElevenLabs, Cartesia, Hume, and others. Voice settings let you choose which voice to use and how it sounds — including stability, clarity, and speed.
For the best voice experience, pair your voice settings with PolyAI’s Raven LLM — it is purpose-built for conversational voice AI and produces responses that sound natural when spoken aloud.

Why voice matters

Voice is one of the strongest signals of trust and professionalism a caller receives. A well-matched voice reinforces your brand identity, while a mismatched one can undermine an otherwise well-built agent. Consider:
  • Accent and region — Match the voice to your caller base. A UK audience expects a different accent than a US one.
  • Tone and texture — Professional services benefit from calm, authoritative voices. Hospitality and retail may suit warmer, more conversational tones.
  • Consistency — Adjust stability settings to control how much the voice varies between turns. Higher stability sounds more predictable; lower stability sounds more natural.

Where to start

If you are setting up voice for the first time, start with Choosing a good voice for selection guidance, then configure your choice in Agent Voice.

Voice pages

Agent Voice

Select a voice and fine-tune stability, clarity, and other parameters for your agent and disclaimer voices.

Voice library

Browse, preview, and compare all available voices across providers.

Choosing a good voice

Best-practice guidelines for matching voice to your brand, audience, and industry.

Multi-voice

Assign multiple voices to simulate a team of agents within a single project.

Add a voice

Configure voices programmatically using provider classes.

Custom voice

Request a brand-exclusive cloned voice (enterprise).

Voice configuration

Configure greeting audio, disclaimer playback, and call handling settings.
You can also configure voices programmatically using the voice class inside functions.
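As a rough illustration of what programmatic configuration might look like, here is a minimal sketch. The class name, parameter names, and defaults below are hypothetical placeholders, not the platform's actual API — consult your provider class reference for the real signatures.

```python
from dataclasses import dataclass

# Hypothetical sketch only: real class and parameter names may differ.
@dataclass
class Voice:
    provider: str            # e.g. "elevenlabs", "cartesia", "hume"
    voice_id: str            # provider-specific voice identifier
    stability: float = 0.5   # higher = more predictable delivery between turns
    clarity: float = 0.75    # how strongly the voice's character is enforced
    speed: float = 1.0       # playback-rate multiplier

def configure_agent_voice() -> Voice:
    """Return the voice the agent should use for this interaction."""
    # A slightly higher stability trades some naturalness for consistency.
    return Voice(provider="elevenlabs", voice_id="rachel", stability=0.6)

voice = configure_agent_voice()
```

The same pattern would let a function swap voices per call — for example, selecting a different accent based on the caller's region.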

Voice conversation style guide

These guidelines help your voice agent sound natural rather than robotic. They focus on the linguistic patterns that make spoken conversations feel human.

Social presence markers

Natural conversation includes patterns that acknowledge conversational history and participants. These contribute to a sense of collaboration rather than rote routine-following. Use progressive tense for active collaboration:
  • “I’m not seeing any accounts under that phone number…” conveys active collaboration
  • “I don’t see any accounts” sounds too definitive
Reference shared context implicitly — don’t restate what both parties already know:
  • “How about Wednesday instead?” (not “How about Wednesday instead of Tuesday?”)
  • “In that case, how does Saturday at 2:30 sound?” (not “Since you said you prefer weekends…”)
Vary confirmationals — use a mix of “Great,” “Okay,” “Perfect,” and “Sure” rather than repeating the same one.
Use conversational datives for a collaborative feel:
  • “Could you read me your account number?” rather than “Could you read your account number aloud?”
  • “Can you log into your account for me?” rather than “Can you log into your account?”
Use face-saving past tense when referencing a user’s request:
  • “When were you trying to come in?” rather than “When are you trying to come in?”

Avoid over-explaining

LLMs tend to justify every action in a way humans don’t. Most of the time, the important information and the request can be combined into a single sentence:
  • “No problem, what’s your account number?” rather than “In order to check for outages, I’ll need to look up your account. Could you tell me your account number?”

Walkthrough conversations

When giving multi-turn walkthroughs, don’t end every step with “let me know when you’ve done that.” Provide the instruction and wait — the user will confirm on their own.
Last modified on March 31, 2026