The Conversation Diagnosis menu on the Conversation Review page provides deeper insights into the agent’s behavior and decision-making at each turn. Toggle different data layers to inspect how the agent understood the user and responded.

Available diagnosis views

Conversation variables

Displays live variable values captured during the call (e.g., booking IDs, customer names, flags).

Flows and steps

Tracks the agent’s navigation through flows and steps, showing the execution path and decisions made.

Function calls

Shows the functions the agent triggered during the call, including call parameters and outcomes.

LLM Request

Shows the underlying large-language-model (LLM) request made by the agent for a given turn, where applicable.

Topic citations

Highlights the knowledge base topics the agent used to generate each response.

Transcript corrections

Displays where the automatic transcript was edited for clarity or accuracy.

Turn latency

Measures how long the agent took to respond at each turn.
Latency visualization now includes detailed breakdowns to help identify performance bottlenecks.

Latency breakdown

When viewing turn latency, you can inspect timing for:
  • LLM requests: Time spent waiting for the language model to generate a response
  • Function calls: Time spent executing functions, including API calls and data processing
  • Total response time: Combined time from user speech end to agent response start
Use these breakdowns to:
  • Identify slow function calls that need optimization
  • Understand LLM response times for different query types
  • Find bottlenecks causing user-perceived delays
  • Compare latency across different conversation types
High LLM latency may indicate complex prompts that could be simplified. High function call latency often points to slow external API dependencies.
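The triage described above can be approximated offline if you export per-turn timing data. A minimal sketch, assuming hypothetical field names (`llm_ms`, `function_ms`, `total_ms`) rather than the product's actual schema:

```python
# Illustrative latency triage over exported turn-timing data.
# Field names are assumptions for this sketch, not the real export format.

def find_bottlenecks(turns, threshold_ms=1000):
    """Return (turn number, dominant component) for turns over the threshold."""
    slow = []
    for t in turns:
        if t["total_ms"] > threshold_ms:
            # Attribute the delay to whichever component dominated the turn.
            component = (
                "function calls" if t["function_ms"] > t["llm_ms"] else "LLM request"
            )
            slow.append((t["turn"], component))
    return slow

turns = [
    {"turn": 1, "llm_ms": 420, "function_ms": 0, "total_ms": 610},
    {"turn": 2, "llm_ms": 380, "function_ms": 1250, "total_ms": 1820},
    {"turn": 3, "llm_ms": 510, "function_ms": 90, "total_ms": 740},
]
find_bottlenecks(turns)  # → [(2, "function calls")]
```

Here only turn 2 exceeds the threshold, and its function time dominates, which points at a slow external dependency rather than prompt complexity.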

Interruptions

Shows when the caller interrupted the agent or when barge-in was detected.

Variants

Identifies which variant handled each part of the call.

Logs

Displays function logs and any structured conv.log entries emitted during runtime.

Entities

Lists extracted entities captured from the user, like booking numbers, account IDs, or city names. This is especially useful in transactional scenarios where the agent needs to capture structured data from free text.
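To illustrate what "structured data from free text" means here, the sketch below pulls a booking number and an account ID out of an utterance with regular expressions. The patterns are invented for this example; real agents rely on the platform's entity extraction rather than hand-written regexes.

```python
import re

# Hypothetical patterns for the entity types mentioned above.
PATTERNS = {
    "booking_number": re.compile(r"\b[A-Z]{2}\d{6}\b"),   # e.g. AB123456
    "account_id": re.compile(r"\bACC-\d{4,}\b"),          # e.g. ACC-99871
}

def extract_entities(utterance):
    """Return the first match for each entity type found in the utterance."""
    found = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            found[name] = match.group(0)
    return found

extract_entities("My booking is AB123456 on account ACC-99871")
# → {'booking_number': 'AB123456', 'account_id': 'ACC-99871'}
```

The Entities view shows the result of this kind of capture per turn, so you can verify the agent picked up the right values before it acted on them.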

Using diagnosis for optimization

Combine multiple diagnosis views to understand agent behavior:
  1. Enable Turn latency to identify slow responses.
  2. Check Function calls for those turns to see if external calls are causing delays.
  3. Review LLM Request to understand prompt complexity.
  4. Use Flows and steps to verify the agent followed the expected path.
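Steps 1 and 2 of this workflow can also be sketched as a simple join between exported latency and function-call data. The data shapes below are assumptions for illustration, not an actual export format:

```python
def slow_turns_with_functions(latency_by_turn, functions_by_turn, threshold_ms=1000):
    """For each turn over the latency threshold, list the functions that ran,
    to check whether external calls explain the delay."""
    report = {}
    for turn, total_ms in latency_by_turn.items():
        if total_ms > threshold_ms:
            report[turn] = functions_by_turn.get(turn, [])
    return report

latency = {1: 610, 2: 1820, 3: 740}           # total response time per turn (ms)
functions = {2: ["lookup_booking"], 3: []}    # functions triggered per turn
slow_turns_with_functions(latency, functions)  # → {2: ['lookup_booking']}
```

A slow turn with no associated functions suggests the delay came from the LLM request instead, which is where the LLM Request view helps.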