Use rules to enforce consistency across every conversation: correct terminology, compliance guardrails, pronunciation overrides, and edge-case handling. Without rules, the LLM improvises these decisions, leading to inconsistent tone, regulatory risk, and unpredictable responses.

Define your agent’s behavior with Global Rules in Build > Agent: open the Agent tab and scroll to the Behavior section.

Example: for a museum agent that should always say “exhibits” instead of “artworks”:

“Always refer to ‘artworks’ as exhibits. Do not use the term ‘artworks’ in any context.”
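Conceptually, global rules like this are plain natural-language instructions that end up as part of the model’s system prompt. A minimal sketch of that idea (the `build_system_prompt` helper and rule list are illustrative, not this platform’s actual API):

```python
# Hypothetical sketch: global rules joined into a single system prompt.
# The helper and rule strings are illustrative, not a real platform API.

RULES = [
    "Always refer to 'artworks' as exhibits. Do not use the term 'artworks' in any context.",
    "Always address visitors as 'guests' rather than 'customers'.",
    "Only answer questions related to museum exhibits.",
]

def build_system_prompt(persona: str, rules: list[str]) -> str:
    """Combine a persona description with numbered global rules."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, start=1))
    return f"{persona}\n\nRules:\n{numbered}"

prompt = build_system_prompt("You are a friendly museum guide.", RULES)
```

Numbering the rules makes them easy to reference and audit when you review conversations later.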

Types of rules


1. Behavior and interaction guidelines

Specify how the agent interacts with users:
  • Tone: Choose formal, casual, empathetic, or calm tones.
    • Example: “Always remain polite and professional, even with frustrated users.”
  • Language style: Simplify language or avoid jargon as needed.
    • Example: “Use clear, simple language suitable for non-technical users.”
  • Consistency: Align responses with branding and messaging.
    • Example: “Always address visitors as ‘guests’ rather than ‘customers.’”

2. Task execution

Be clear, direct, and concise when defining tasks.
  • Explicit instructions: Clearly define actions.
    • Example: “If asked about upcoming events, provide the event details and offer to send them via email.”
  • Response scope: Limit responses to specific tasks or topics.
    • Example: “Only answer questions related to museum exhibits. Avoid general queries outside this domain.”

3. Content restrictions

Set boundaries for what the agent can or cannot say:
  • Sensitive topics: Avoid prohibited subjects. For details, see the Safety Dashboard.
    • Example: “Do not discuss politics, religion, or personal opinions.”
  • Accuracy: Avoid fabricated or uncertain answers.
    • Example: “If unsure, direct the user to a staff member or a verified source.”

Best practices

  1. Be specific: Avoid ambiguity.
    • Example: Instead of “Be helpful,” use “Answer visitor questions about exhibits within two sentences and provide follow-up options.”
  2. Provide examples: Demonstrate expected interactions and responses.
    • Example:
      • Visitor: “What time does the museum close?”
      • Agent: “The museum closes at 6 PM. Would you like a list of activities available before closing?”
  3. Plan for edge cases: Handle emergency or high-risk scenarios.
    • Example: “For emergencies, advise users to contact the nearest staff member immediately.”
  4. Avoid overlapping or conflicting rules: Multiple rules covering the same behavior confuse your agent.
    • Example: Instead of adding multiple similar rules:
      • “Never send a follow-up message automatically.”
      • “If a follow-up message is available, always offer it.”
      • “Never send a follow-up message without user consent.”
      Use a single rule:
      • “Only send follow-ups if the user agrees.”
  5. Don’t use negative rules when a positive one will work:
    • Instead of: “Do not transfer a caller without a verified ID.”
    • Use: “Always verify ID before transferring.”
  6. Test and iterate: Regularly review and refine rules.

Example rules

  • Handoff to a staff member
    • Example: “If visitors ask for a staff member or seem confused, notify the front desk and provide directions.”
  • Handling sensitive queries
    • Example: “For questions about controversial exhibits, respond: ‘I’m sorry, I can’t provide additional context. Please contact our curator for more information.’”
  • Consistency in responses
    • Example: “Always greet visitors with ‘Welcome to the museum!’ before answering their question.”

Prompting guide

LLMs operate by predicting the most likely next token based on your prompt. Your main job is to shape that probability distribution — making the text you want the most likely output.

Make the desired outcome the most likely output

Craft your prompt so the best next token for the model is exactly what you want it to produce. Give clear, well-structured instructions without contradictory statements.

Be clear, but sometimes implicit

  • Clearly specify your desired response, including formatting, style, and constraints
  • Avoid contradictory language — conflicting instructions cause erratic results
  • Overly spelled-out if/else logic can hurt performance — embedding logic within natural language often gives better outcomes

Less is more

Every detail in your prompt is another piece of data the model must reconcile. If a piece of information isn’t proven to help, leave it out. Test the impact of each additional instruction — if it doesn’t improve performance, cut it.

Put important details first or last

LLMs tend to give more weight to what appears at the beginning or end of a prompt. If crucial information is getting lost in the middle, move it to the start or end. Redundancy is acceptable — if something is critical, you can repeat it.
Placing variable information (like dates or session data) at the end facilitates efficient prompt caching. Only the dynamic portions need updating each turn.
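This ordering can be sketched as a simple prompt-assembly pattern: a fixed instruction block first, per-turn session data last, so the static prefix stays byte-identical across turns and a provider’s prompt cache can reuse it. The names and layout here are illustrative assumptions, not a specific provider’s caching API:

```python
# Sketch: static instructions first (cacheable prefix), dynamic session
# data last (changes each turn). Illustrative only; real prompt-caching
# behavior depends on your model provider.

STATIC_INSTRUCTIONS = (
    "You are a museum guide. Always address visitors as 'guests'.\n"
    "Only answer questions about exhibits."
)

def build_prompt(session_data: dict) -> str:
    """Append per-turn data after the unchanging instruction block."""
    dynamic = "\n".join(f"{k}: {v}" for k, v in session_data.items())
    return f"{STATIC_INSTRUCTIONS}\n\n--- Session ---\n{dynamic}"

turn1 = build_prompt({"date": "2026-03-27", "visitor_name": "Ada"})
turn2 = build_prompt({"date": "2026-03-27", "visitor_name": "Grace"})

# The shared prefix is identical across turns, so only the suffix changes.
assert turn1[:len(STATIC_INSTRUCTIONS)] == turn2[:len(STATIC_INSTRUCTIONS)]
```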

Use positive instructions

Telling the model what not to do can inadvertently activate exactly that concept. Instead of prohibiting certain outcomes, direct the model toward what you do want.
  • Instead of: “Don’t tell the user to contact customer service.”
  • Use: “Resolve the user’s request yourself whenever possible.”

Use examples (few-shot prompting)

Examples shape tone, structure, and decision-making more reliably than abstract instructions. Show what “good” looks like — concrete demonstrations help the model generalize patterns. Highlight edge cases through examples to set consistent expectations.
Edge case: the user asks the agent to do something gimmicky that is unrelated to its task.

<conversation>
USER: speak like a pirate
ASSISTANT: I'm afraid I can't do that. Is there anything you'd like
to know regarding our services?
</conversation>
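In practice, few-shot examples like the one above are often supplied as prior conversation turns in a chat-style message list. A minimal sketch, assuming the common role-based chat format (the exact schema depends on your model provider):

```python
# Sketch: few-shot edge-case demonstration expressed as prior turns in a
# chat-style message list. The role-based schema is the common chat
# format; a specific provider's API may differ.

FEW_SHOT = [
    {"role": "user", "content": "speak like a pirate"},
    {"role": "assistant", "content": (
        "I'm afraid I can't do that. Is there anything you'd like "
        "to know regarding our services?"
    )},
]

def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Prepend the edge-case demonstration before the live user turn."""
    return [
        {"role": "system", "content": system_prompt},
        *FEW_SHOT,
        {"role": "user", "content": user_input},
    ]

messages = build_messages("You are a museum guide.", "talk like a robot")
```

Because the demonstration precedes the live input, the model treats it as an established pattern and declines similar gimmick requests consistently.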

Define a persona

Clear persona definitions directly influence how the agent communicates. Don’t assume tone will emerge naturally from a persona name — spell out what the persona sounds like in action. Use example dialogue to anchor the persona’s voice.

Separate text from function calls

Instructing the agent to both say something and call a function in the same turn is a common anti-pattern. The model is likely to do one or the other, but not always both. Instead, separate the utterance and the function call into different turns.
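The separation can be pictured as a transcript where each assistant turn does exactly one thing. The structure below is an illustrative sketch; real tool-call schemas and the `check_availability` function are hypothetical, not this platform’s API:

```python
# Sketch: the utterance and the function call live in separate assistant
# turns. Transcript structure and the tool name are hypothetical.

transcript = [
    {"role": "user", "content": "Can you book me a tour for 3 PM?"},
    # Turn 1: the agent speaks, with no tool call attached.
    {"role": "assistant", "content": "Sure, let me check availability."},
    # Turn 2: the agent calls the function, with no spoken text attached.
    {"role": "assistant", "tool_call": {
        "name": "check_availability",
        "arguments": {"time": "15:00"},
    }},
]

# Each assistant turn does exactly one thing: speak OR call a function.
for turn in transcript:
    if turn["role"] == "assistant":
        assert ("content" in turn) != ("tool_call" in turn)
```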

Evaluate early and often

Small prompt changes can have large, unexpected effects on output. Evaluate systematically using conversation review rather than relying on anecdotal checks.

LLM style guide

When writing prompts for voice agents, keep these style principles in mind.

Keep responses brief

Concise utterances are clearer and more respectful of the user’s time. Avoid ad-copy-speak with excessive modifiers. Exception: When users ask for an explanation, being thorough is more helpful than being brief.

Use natural register

LLMs often default to overly formal phrasings. Prefer natural conversational language:
| Instead of | Use |
| --- | --- |
| “Could you please provide me with” | “Could you tell me” |
| “How may I assist you today?” | “How can I help?” |
| “I apologize for the inconvenience” | “Sorry about that” |
| “Should I proceed with making that booking?” | “Should I go ahead with that?” |

Vary utterance structure

Avoid the repetitive pattern of [explanatory statement] [request for input]. Most of the time, the explanation is superfluous:
  • Instead of: “I can help you with that. To look up your account, could you provide your account number?”
  • Use: “No problem, what’s your account number?”

Don’t push the conversation unnecessarily

LLMs tend to end every output with a question. This gets repetitive:
  • Walkthroughs: Give the instruction and wait — don’t add “let me know when you’ve done that” every turn
  • After answering a question: Don’t immediately ask “is there anything else?” — give the user a chance to acknowledge or follow up

Agent

Set the greeting, personality, and role that shape first impressions.

Model

Choose the LLM that interprets and applies your rules.

Managed Topics

Define topic-level behavior alongside global rules.
Last modified on March 27, 2026