Use example-driven prompts inside flow steps to improve accuracy and reduce ambiguity
Few-shot prompting (FSP) is a technique for guiding the LLM by showing it examples of what users might say — and how the agent should respond. Inside a flow step, this helps the agent:
Match vague or unexpected inputs to the correct function call
Extract values in tricky formats (e.g., spelled names, long reference codes)
Avoid asking unnecessary questions when the value is already present
Maintain a consistent tone, phrasing, or logic pattern
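To make this concrete, here is a minimal sketch of a few-shot block you might embed in a step prompt. The function name (update_delivery_address), its arguments, and the example wording are hypothetical; substitute whatever functions and phrasing your step actually lists.

```python
# A hypothetical few-shot block for a step that collects a delivery address.
# The function name, arguments, and wording are illustrative only.
FEW_SHOT_BLOCK = """\
Examples:

User: "uh, just send it to my new place, 12 Elm Street, apartment 4B"
Agent: Call update_delivery_address(street="12 Elm Street", unit="4B").
       The address is already present, so do not ask for it again.

User: "the street is spelled M-C-G-R-E-G-O-R, number 5"
Agent: Call update_delivery_address(street="McGregor Street", unit="5").
       Confirm the spelling back to the user before moving on.
"""
```

Each pair shows the model both the kind of input to expect and the exact behavior to reproduce, which is what lets it handle vague or oddly formatted turns without extra questions.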
Inside a flow step, the model sees only a limited context: the step prompt itself and the listed functions (names, descriptions, arguments). It does not see previous step prompts or conversation state unless you surface them. That means each step must stand alone, and few-shot prompting fills in the gaps by giving the model examples to reason from.
Because step prompts are inserted last in the LLM input stack, FSP examples appear directly before the model generates its next turn — making them highly influential.
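To picture why position matters, here is a rough sketch of how a step prompt (carrying its few-shot block) might be assembled into the model input. The exact stack is platform-specific, so treat the order and the generic chat-message format below as assumptions for illustration.

```python
from typing import Dict, List

def build_model_input(
    system_prompt: str,
    conversation: List[Dict[str, str]],
    step_prompt_with_examples: str,
) -> List[Dict[str, str]]:
    """Assemble the model input; assumed order, platform-specific in practice.

    The step prompt (which carries the few-shot examples) is appended last,
    so the examples sit directly before the model's next turn.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(conversation)  # prior user/agent turns
    messages.append({"role": "system", "content": step_prompt_with_examples})
    return messages
```

Because the few-shot block is the last thing the model reads before answering, it tends to outweigh instructions that sit earlier in the stack, which is the effect described above.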
Each example pairs a realistic user input with a matching agent behavior, often a response plus a function call. Place these inside the step prompt, either inline or at the top before your main instructions. You don’t need dozens of examples; usually 2–3 is enough, especially if you cover: