This lesson explains what functions are, why they exist, and how the LLM uses them. By the end you will understand the full request-response loop and be able to create and reference a function correctly.

Why functions exist

The Agent tab (personality, role, rules) only gets you so far. Without functions, the agent can’t retrieve user data, execute actions, save state, or integrate with external systems. Prompt engineering also doesn’t scale — putting dozens of user journeys into one rules box gives the LLM too much context and makes it harder to reason about any single scenario. Functions solve both problems: external integration and fine-grained control over what the LLM knows and does at each step.

How LLMs use tools

Before looking at Agent Studio specifically, it helps to understand how LLMs use tools in general — because this pattern is the same across all modern AI systems. An LLM can produce two kinds of output:

Text output

The model speaks to the user using natural language:
“The weather in Paris is usually mild in October.”

Tool call

The model communicates with a system — a function, API, or piece of code — to fetch data or trigger an action:
call: get_weather with {city: "Paris"}
The LLM mediates between the user and the system. It takes a user request, decides whether it needs to call a tool to answer it, calls that tool, receives a result, and then reports the result back to the user as text.
User → LLM → tool call → system

User ← LLM ← result ← system
This is the core loop that powers almost everything beyond basic FAQ responses.
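The loop above can be sketched in plain Python. Both `llm` and `get_weather` here are hypothetical stand-ins for illustration, not Agent Studio APIs:

```python
# Minimal sketch of the request-response loop.
def get_weather(city: str) -> str:
    # Hypothetical tool: a real system would call a weather API here
    return f"Mild and cloudy in {city}"

def llm(messages):
    # Stand-in for the model: returns either a text output or a tool call
    last = messages[-1]
    if last["role"] == "user" and "weather" in last["content"].lower():
        return {"type": "tool_call", "name": "get_weather",
                "arguments": {"city": "Paris"}}
    if last["role"] == "function":
        return {"type": "text", "content": f"Here you go: {last['content']}"}
    return {"type": "text", "content": "How can I help?"}

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
output = llm(messages)                        # User -> LLM
if output["type"] == "tool_call":             # LLM -> tool call -> system
    result = get_weather(**output["arguments"])
    messages.append({"role": "function", "content": result})
    output = llm(messages)                    # system -> result -> LLM -> User
print(output["content"])                      # "Here you go: Mild and cloudy in Paris"
```

Note that the model is called twice: once to decide on the tool call, and once to turn the result into text. This is the two-request pattern covered later in this lesson.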

Creating a function in Agent Studio

In Agent Studio, tools are called functions. You write them in Python and they are available for the LLM to call during a conversation. To create a function, go to Build → Functions and click the + button. Every function has:
  • Name — how the function is identified (used by the LLM to decide when to call it)
  • Description — what the function does (also read by the LLM when deciding whether to call it)
  • Parameters — the inputs the function needs, each with a name, description, and type
  • Python code — what the function actually does when called
The LLM reads the function name, description, and parameter descriptions when deciding whether to call the function and what to pass as arguments. Name and describe everything clearly — this directly affects whether the LLM uses your function correctly.

Example: a simple addition function

def add_two_numbers(conv, first_number: float, second_number: float) -> str:
    total = first_number + second_number
    return f"The total is {total}"
Parameters:
  • first_number — The first number the user wants to add (type: number)
  • second_number — The second number the user wants to add (type: number)
Functions called by the LLM must return either a string or a dictionary with specific keys. Returning an integer, list, or any other type causes an error; the LLM sees the error and retries, up to three times, before giving up.
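As a sketch of this rule (the specific dictionary keys are not covered in this lesson, so only the string form is shown as valid):

```python
# Return-type rule: strings are safe, bare numbers are not.
def add_ok(conv, first_number: float, second_number: float) -> str:
    return f"The total is {first_number + second_number}"   # valid: string

def add_broken(conv, first_number: float, second_number: float):
    return first_number + second_number   # invalid: plain number, causes error + retries
```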

Making a function visible to the LLM

Creating a function is not enough. The LLM will not call a function it does not know exists. To make a function available to the LLM, you must reference it somewhere — in a topic action, a flow step, or directly in the Rules field using the @function_name syntax. When you reference a function, Agent Studio highlights it and registers it in the LLM request. The LLM will then see the function’s full definition (name, description, parameters) and can choose to call it.
A common mistake is creating a function and testing the agent, only to find the LLM never calls it. Check the LLM request in Conversation Diagnosis — if the function is not listed under “functions”, it has not been referenced anywhere.

What the LLM actually sees

You can inspect the LLM request directly in Conversation Review → Diagnosis → LLM requests. The request is structured like this:
  • Base system prompt (intro): your Personality and Role fields concatenated
  • Base system prompt (rules): everything in the Rules field
  • Context information: knowledge base content retrieved for this turn (empty if no relevant topic matched)
  • Conversation history: the full transcript so far, alternating assistant / user roles
  • Functions: definitions of any functions that have been referenced
The LLM does not see the Python code inside the function. It only sees the function’s name, description, and parameters.
The Greeting is different — it is hard-coded text played at the start of every call and is not generated by the LLM. It does appear in conversation history so the LLM knows how the call opened, but no LLM request is made to produce it.
This means your function name, description, and parameter descriptions are all part of the prompt. Write them with the same care as any other prompt text.
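As an illustration, the LLM-visible definition of add_two_numbers can be pictured as a JSON schema like the one below. This is the common shape used for tool definitions across LLM providers, not necessarily Agent Studio's exact internal format — note that the Python body is absent:

```python
import json

# Illustrative sketch of what the LLM receives for add_two_numbers:
# name, description, and parameters only. Never the Python code.
function_definition = {
    "name": "add_two_numbers",
    "description": "Adds two numbers provided by the user.",
    "parameters": {
        "type": "object",
        "properties": {
            "first_number": {
                "type": "number",
                "description": "The first number the user wants to add",
            },
            "second_number": {
                "type": "number",
                "description": "The second number the user wants to add",
            },
        },
        "required": ["first_number", "second_number"],
    },
}
print(json.dumps(function_definition, indent=2))
```

Every string in this structure is prompt text, which is why vague names and descriptions lead directly to wrong or missing function calls.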

The two-request pattern

When the LLM calls a function, it takes two LLM requests to produce the final response to the user.
Request 1: the LLM decides to call a function

The LLM receives the user input, sees the available function definitions, and outputs a tool call rather than text. The response content is empty; the function call object contains the function name and the parameter values the LLM extracted from the conversation.
The function runs

Agent Studio executes the Python function with the provided parameters and gets back a result.
Request 2: the LLM reports the result

The function result is inserted into the conversation history under a function role. The LLM sees this result and produces a text response to communicate the result to the user.
This is visible in Conversation Diagnosis. When a function is called, you will see two requests instead of one. Enable the Function calls toggle to see what parameters were passed and what was returned.

What the conversation history looks like after a function call

assistant: "Hi, thanks for calling. How can I help?"
user: "What's 2 plus 3?"
assistant: [calls add_two_numbers with first_number=2, second_number=3]
function: "The total is 5"
assistant: "2 plus 3 equals 5."
The function role is the third role alongside user and assistant. This is how function results are fed back into the LLM’s context.

Try it yourself

1. Create a function

In Build → Functions, create a function called get_store_hours with no parameters. Return a string like "The store is open Monday–Friday, 9am to 6pm.".
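A minimal sketch of this exercise function, where the conv argument follows the same convention as add_two_numbers earlier in this lesson:

```python
# Exercise function: no parameters beyond the conv object,
# returns a plain string (a valid return type for LLM-called functions).
def get_store_hours(conv) -> str:
    return "The store is open Monday–Friday, 9am to 6pm."
```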
2. Reference it

In Build → Agent → Rules, add a reference to @get_store_hours. Confirm it highlights.
3. Test it

Open Chat and ask “What are your opening hours?” Check the LLM request in Conversation Diagnosis to confirm:
  • The function appears in the functions list
  • Two requests are shown (one tool call, one text response)
  • The function result appears in conversation history under the function role
Last modified on March 26, 2026