Available diagnosis views
Conversation variables
Displays live variable values captured during the call (e.g., booking IDs, customer names, flags).
Flows and steps
Tracks the agent's navigation through flows and steps, showing the execution path and decisions made.
Function calls
Shows the functions the agent triggered during the call, including call parameters and outcomes.
LLM Request
Shows the underlying large-language-model (LLM) request made by the agent for a given turn, where applicable.
Topic citations
Highlights the knowledge base topics the agent used to generate each response.
Transcript corrections
Displays where the automatic transcript was edited for clarity or accuracy.
Turn latency
Measures how long the agent took to respond at each turn. Latency visualization now includes detailed breakdowns to help identify performance bottlenecks.
Latency breakdown
When viewing turn latency, you can inspect timing for the following components:

| Component | Description |
|---|---|
| LLM requests | Time spent waiting for the language model to generate a response |
| Function calls | Time spent executing functions, including API calls and data processing |
| Total response time | Combined time from user speech end to agent response start |
Use the latency breakdown to:
- Identify slow function calls that need optimization
- Understand LLM response times for different query types
- Find bottlenecks causing user-perceived delays
- Compare latency across different conversation types
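As an illustration of how the components in the table add up, here is a minimal sketch of a per-turn timing record. The class and field names (`TurnTiming`, `llm_ms`, `functions_ms`, `other_ms`) are hypothetical, not the platform's actual export schema:

```python
from dataclasses import dataclass

@dataclass
class TurnTiming:
    """Hypothetical per-turn timing record; field names are
    illustrative, not the platform's actual data model."""
    llm_ms: float        # time waiting for the LLM to generate a response
    functions_ms: float  # time executing functions, including API calls
    other_ms: float      # transcription, routing, and other overhead

    @property
    def total_ms(self) -> float:
        # Total response time: user speech end -> agent response start
        return self.llm_ms + self.functions_ms + self.other_ms

def slowest_component(timing: TurnTiming) -> str:
    """Return the component contributing most to perceived delay."""
    parts = {
        "llm": timing.llm_ms,
        "functions": timing.functions_ms,
        "other": timing.other_ms,
    }
    return max(parts, key=parts.get)

turn = TurnTiming(llm_ms=420.0, functions_ms=1150.0, other_ms=80.0)
print(turn.total_ms)            # 1650.0
print(slowest_component(turn))  # functions
```

In this example, function execution dominates the turn, which is the pattern the Function calls view would help you confirm.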
Interruptions
Shows when the caller interrupted the agent or when barge-in was detected.
Variants
Identifies which variant handled each part of the call.
Logs
Displays function logs and any structured conv.log entries emitted during runtime.
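As a rough sketch of where those entries come from, the snippet below assumes `conv.log` accepts a message string; the exact signature is platform-specific, and the `_Conv` stub merely stands in for the runtime object the platform injects into functions:

```python
# Minimal stand-in for the runtime's `conv` object. On the real platform
# this object is injected; the stub only illustrates how log entries
# emitted during a function run end up in the Logs diagnosis view.
class _Conv:
    def __init__(self):
        self.entries = []

    def log(self, message: str) -> None:
        # Each call records one structured entry for the Logs view.
        self.entries.append(message)

conv = _Conv()

def lookup_booking(booking_id: str) -> dict:
    """Hypothetical function body that emits conv.log entries at runtime."""
    conv.log(f"lookup_booking called with booking_id={booking_id}")
    result = {"booking_id": booking_id, "status": "confirmed"}  # stubbed API result
    conv.log(f"lookup_booking returned status={result['status']}")
    return result

lookup_booking("BK-1042")
print(conv.entries)
```

Logging the inputs and outcomes of each external call this way makes the Logs view far more useful when you later cross-reference it with Function calls and Turn latency.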
Entities
Lists extracted entities captured from the user, like booking numbers, account IDs, or city names. This is especially useful in transactional scenarios where the agent needs to capture structured data from free text.
Using diagnosis for optimization
Combine multiple diagnosis views to understand agent behavior:
- Enable Turn latency to identify slow responses.
- Check Function calls for those turns to see if external calls are causing delays.
- Review LLM Request to understand prompt complexity.
- Use Flows and steps to verify the agent followed the expected path.
Related pages
- Conversation review – Full conversation analysis
- Performance monitoring – Ongoing performance management
- Functions – Build and optimize functions

