What is the Conversation Simulator?
The Conversation Simulator is a Galtea feature for testing any AI system that interacts with users through multi-turn conversations — from customer-facing chatbots to autonomous AI agents that execute tasks, call tools, and make decisions. It programmatically generates realistic user messages to simulate dialogues, helping you evaluate your AI's ability to handle complex interactions, maintain context, and achieve user goals. Each simulation is guided by a Behavior Test Case containing a user persona, goal, scenario, and stopping criteria. You can run simulations directly via `simulator.simulate()` or as part of the specification-driven `evaluations.run()` workflow, which handles simulation and evaluation together.
Typical Uses
- Dialogue flow testing — Verify your AI produces coherent, natural conversations across multiple turns
- Role adherence — Confirm your AI stays in character and follows its assigned role
- Task completion — Test whether your AI guides users to successfully complete their goals
- Agent workflow validation — Evaluate whether your AI agent calls the right tools, follows the correct steps, and reaches the expected outcome
- Robustness — Evaluate how your AI handles unexpected, off-topic, or challenging user inputs
Communication Strategies
The simulator supports two communication strategies that control how the simulated user messages are generated:
- Written (default) — Concise, text-based messages mimicking real chat or messaging interactions
- Spoken — Natural speech patterns with filler words, hesitations, and a conversational tone
Simulation Result Structure
The `simulate()` method returns a `SimulationResult` object containing the complete conversation history and metadata:
| Field | Type | Description |
|---|---|---|
| `session_id` | `str` | The identifier for the simulation session. |
| `total_turns` | `int` | Total number of conversation turns in the simulation. |
| `messages` | `List[ConversationMessage]` | The full message history. Each message has `role`, `content`, and optional `retrieval_context` and `metadata`. |
| `finished` | `bool` | Whether the simulation ended naturally (`True`) or was stopped (`False`). |
| `stopping_reason` | `Optional[str]` | If stopped, the reason why the simulation ended. |
| `metadata` | `Optional[Dict]` | Additional simulation metadata (when `include_metadata=True`). |
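The structure above can be sketched as Python dataclasses — a minimal model for illustration, assuming the field names and types in the table; this is not the SDK's actual class definition:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class ConversationMessage:
    """One message in the simulated dialogue (fields per the table above)."""
    role: str                                # e.g. "user" or "assistant"
    content: str
    retrieval_context: Optional[str] = None  # optional grounding context
    metadata: Optional[Dict] = None


@dataclass
class SimulationResult:
    """Illustrative model of the object returned by simulate()."""
    session_id: str
    total_turns: int
    messages: List[ConversationMessage] = field(default_factory=list)
    finished: bool = False                   # True if the simulation ended naturally
    stopping_reason: Optional[str] = None    # set only when the simulation was stopped
    metadata: Optional[Dict] = None          # present when include_metadata=True
```

In practice you would receive this object from the SDK rather than construct it yourself; the model is just a reference for which fields to read when post-processing results.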
Example Output
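No sample output survives in this section, so the dict below is a purely illustrative rendering of a `SimulationResult` — every value is invented for demonstration, and only the field names follow the table above:

```python
# Illustrative only — all values are invented; field names follow the
# SimulationResult table above.
example_result = {
    "session_id": "sim-2f9c1b",
    "total_turns": 3,
    "messages": [
        {"role": "user", "content": "Hi, I need to reset my password."},
        {"role": "assistant", "content": "Sure, I can help with that. What's the email on the account?"},
        {"role": "user", "content": "It's jane@example.com."},
    ],
    "finished": True,         # the simulated user reached its goal
    "stopping_reason": None,  # set only when the simulation was cut short
    "metadata": None,         # populated when include_metadata=True
}
```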
SDK Integration
Conversation Simulator SDK
Simulate conversations using the Python SDK
Related
Behavior Tests
The scenario-based tests that drive conversation simulations.
Simulating Conversations Tutorial
Step-by-step guide to running multi-turn conversation simulations.
Specification-Driven Evaluations
Run behavior evaluations automatically from specifications using `evaluations.run()`.