Before you begin, make sure you’ve signed up for an account.
Get your AI agent up and running in four steps. This guide covers the essentials—create an agent, add knowledge, test it, and deploy.

Prerequisites

  • A PolyAI account with access to Agent Studio
  • Basic information about your use case (e.g., customer support, reservations, FAQ)
Step 1: Create your agent

Click + Agent from the home page. Configure the basics:
  • Agent name – Internal identifier for your project
  • Response language – Primary language for responses (see multilingual support for additional languages)
  • Voice – Select from available TTS (text-to-speech) voices
  • Welcome greeting – First message callers hear (can be customized later in agent settings)
Click Next to enter Agent Studio.
You can also duplicate an existing agent by clicking the three-dot menu next to any agent on the home page.
Step 2: Add knowledge

Navigate to Managed Topics in the left sidebar. Click Add topic and provide:
  • Topic name – What this topic covers (e.g., “Store hours”)
  • Sample questions – 5–10 ways users might ask (e.g., “When are you open?”)
  • Answer – The response your agent should give
Click Save to create the topic.
Changes are saved as Drafts. Publish to Sandbox to test them. Learn more about environments and versions.
Optional: Add more topics to expand your agent’s capabilities. See the full Managed Topics guide for details on how RAG (retrieval-augmented generation) powers topic matching.
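A Managed Topic bundles the three fields above. As a rough illustration only — the field names and the keyword matcher below are hypothetical, not the actual Agent Studio schema or its RAG retrieval — a topic could be sketched like this:

```python
# Hypothetical sketch of a Managed Topic record; field names are
# illustrative, not the actual Agent Studio schema.
topic = {
    "name": "Store hours",
    "sample_questions": [
        "When are you open?",
        "What time do you close today?",
        "Are you open on Sundays?",
    ],
    "answer": "We're open Monday to Saturday, 9am to 6pm.",
}

def matches(topic, user_utterance):
    """Toy keyword overlap, standing in for RAG-based retrieval."""
    words = set(user_utterance.lower().split())
    for question in topic["sample_questions"]:
        if words & set(question.lower().rstrip("?").split()):
            return True
    return False
```

This is why 5–10 varied sample questions help: the more phrasings a topic covers, the more likely a user utterance is matched to it.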
Step 3: Test your agent

Click the phone icon in the top-right corner to start an in-browser call. Select Sandbox from the environment dropdown and begin speaking. See Agent chat for more testing options, and review your test calls in the Conversations dashboard.
Step 4: Deploy to production

Once testing is complete, promote your agent through the deployment pipeline:
  1. Go to Environments and Versions in the left sidebar
  2. Click Promote to Pre-release for user acceptance testing
  3. Click Promote to Live to make your agent production-ready
Each environment can have its own phone number and configuration.
You can roll back to any previous version if issues arise. See the deployment pipeline guide for details.
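The promotion flow above is strictly ordered: changes move from Draft through Sandbox and Pre-release before reaching Live. A minimal sketch (environment names are from this guide; the helper function is a hypothetical illustration, not a PolyAI API):

```python
# Ordered environments in the deployment pipeline (names from this guide);
# the promote() helper is a hypothetical illustration, not a real API.
PIPELINE = ["Draft", "Sandbox", "Pre-release", "Live"]

def promote(current_env):
    """Return the next environment in the pipeline, or raise if already Live."""
    index = PIPELINE.index(current_env)  # raises ValueError on unknown names
    if index == len(PIPELINE) - 1:
        raise ValueError("Already in Live; nothing to promote to.")
    return PIPELINE[index + 1]
```

The one-way ordering is the point: each stage gates the next, so untested changes cannot skip straight to Live.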

How your agent works

When a user interacts with your agent, the following happens:
  1. User speaks or types – Audio is captured (voice) or text is received (chat)
  2. ASR transcribes – Speech is converted to text (voice only). Learn more about speech recognition.
  3. LLM processes – The model retrieves relevant knowledge and generates a response. Configure which model to use in agent settings.
  4. TTS synthesizes – Text is converted back to speech (voice only). Customize voices in voice configuration.
  5. Response delivered – The agent replies to the user
This cycle repeats for each turn in the conversation. See processing order for a detailed breakdown.
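The five stages above can be sketched as a single voice turn. Everything here is a placeholder — the stage functions and their stub outputs are illustrative, not a real PolyAI SDK:

```python
# Hypothetical sketch of one voice turn; the stage functions are
# placeholders with stubbed outputs, not a real PolyAI SDK.
def transcribe(audio):              # stage 2: ASR, speech -> text
    return "when are you open"      # stubbed transcription

def generate_response(text):        # stage 3: LLM retrieves knowledge + responds
    return "We're open 9am to 6pm, Monday to Saturday."

def synthesize(text):               # stage 4: TTS, text -> speech
    return b"<audio for: %s>" % text.encode()

def handle_turn(audio):
    """Run stages 2-5 for one utterance (stage 1 already captured the audio)."""
    text = transcribe(audio)         # ASR (voice only)
    reply = generate_response(text)  # LLM
    speech = synthesize(reply)       # TTS (voice only)
    return speech                    # stage 5: response delivered
```

For chat, the ASR and TTS stages drop out and text passes straight to and from the LLM stage.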
Adjust the Interaction Style slider under Audio Management to tune latency and response behavior. Choose between Turbo, Swift, Balanced, or Precise.