Conversation simulation generates synthetic multi-turn conversations to test chat AI agents. Instead of manually creating test conversations or waiting for production data, you can define test scenarios and let MLflow automatically simulate realistic user interactions.
Note
Conversation simulation is experimental. The API and behavior may change in future releases.
Prerequisites
Install MLflow 3.10.0 or later:

```shell
pip install --upgrade 'mlflow[databricks]>=3.10'
```
Why simulate conversations?
| Aspect | Manual test data | Conversation simulation |
|---|---|---|
| Version testing | Cannot repeatedly test new agent versions with the same conversations | Replays identical scenarios across agent versions |
| Coverage | Limited by human creativity and time | Generates diverse edge cases at scale |
| Consistency | Variation in manually created tests | Reproducible test scenarios with defined goals |
| Scale | Creating many test cases is time-consuming | Generates hundreds of conversations instantly |
| Maintenance | Test data must be updated by hand | Regenerate conversations when requirements change |
Conversation simulation addresses these challenges by programmatically generating conversations from predefined goals and personas, enabling:
- Systematic evaluation: test different agent versions against consistent goals and personas
- Red teaming: stress-test your agent against diverse user behaviors at scale
- Rapid iteration: generate new test conversations instantly when requirements change
Workflow
1. Define test cases or extract them from existing conversations - specify a goal, persona, and context for each simulated chat, or generate them from production sessions.
2. Create the simulator - initialize a ConversationSimulator with your test cases and configuration.
3. Define your agent - implement your agent in a function that accepts the conversation history.
4. Run evaluation - pass the simulator to mlflow.genai.evaluate() along with scorers.
Quickstart
Here is a complete example that simulates conversations and evaluates them:
```python
import mlflow
from mlflow.genai.simulators import ConversationSimulator
from mlflow.genai.scorers import ConversationCompleteness, Safety
from openai import OpenAI

client = OpenAI()

# 1. Define test cases with goals (required) and optional persona/context
test_cases = [
    {
        "goal": "Successfully configure experiment tracking",
    },
    {
        "goal": "Identify and fix a model deployment error",
        "persona": "You are a frustrated data scientist who has been stuck on this issue for hours",
    },
    {
        "goal": "Set up model versioning for a production pipeline",
        "persona": "You are a beginner who needs step-by-step guidance",
        "context": {"user_id": "beginner_123"},  # user_id is passed to predict_fn via kwargs
    },
]

# 2. Create the simulator
simulator = ConversationSimulator(
    test_cases=test_cases,
    max_turns=5,
)

# 3. Define your agent function
def predict_fn(input: list[dict], **kwargs):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=input,
    )
    return response.choices[0].message.content

# 4. Run evaluation with conversation and single-turn scorers
results = mlflow.genai.evaluate(
    data=simulator,
    predict_fn=predict_fn,
    scorers=[
        ConversationCompleteness(),  # Multi-turn scorer
        Safety(),  # Single-turn scorer (applied to each turn)
    ],
)
```
Defining test cases
Each test case represents one conversation scenario. Test cases support three fields:
| Field | Required | Description |
|---|---|---|
| `goal` | Yes | What the simulated user is trying to accomplish |
| `persona` | No | Description of the user's personality and communication style |
| `context` | No | Additional kwargs passed to your predict function |
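Based on the field table above, a small helper can catch malformed test cases before they reach the simulator. This helper is hypothetical (not part of the MLflow API); it simply encodes the schema described in the table:

```python
def validate_test_case(tc: dict) -> list[str]:
    """Return a list of problems with a test-case dict; empty means valid."""
    problems = []
    # "goal" is the only required field and must be a non-empty string.
    if not isinstance(tc.get("goal"), str) or not tc.get("goal", "").strip():
        problems.append("'goal' is required and must be a non-empty string")
    # "persona" and "context" are optional but typed when present.
    if "persona" in tc and not isinstance(tc["persona"], str):
        problems.append("'persona' must be a string when present")
    if "context" in tc and not isinstance(tc["context"], dict):
        problems.append("'context' must be a dict of kwargs when present")
    unknown = set(tc) - {"goal", "persona", "context"}
    if unknown:
        problems.append(f"unknown fields: {sorted(unknown)}")
    return problems

assert validate_test_case({"goal": "Debug a deployment error"}) == []
assert validate_test_case({"persona": "Senior engineer"}) != []  # missing goal
```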
Goal
The goal describes what the simulated user wants to accomplish. It should be specific enough to guide the conversation but open-ended enough to allow natural dialogue. Good goals describe the expected outcome, so the simulator knows when the user's intent has been fulfilled:
```python
# Good goals - specific, actionable, and describe expected outcomes
{"goal": "Successfully configure MLflow tracking for a distributed training job"}
{"goal": "Understand when to use experiments vs. runs in MLflow"}
{"goal": "Identify and fix why model artifacts aren't being logged"}

# Less effective goals - too vague, no expected outcome
{"goal": "Learn about MLflow"}
{"goal": "Get help"}
```
Persona
The persona determines how the simulated user communicates. If not specified, a default helpful-user persona is used:
```python
# Technical expert who asks detailed questions
{
    "goal": "Reduce model serving latency below 100ms",
    "persona": "You are a senior ML engineer who asks precise technical questions",
}

# Beginner who needs more guidance
{
    "goal": "Successfully set up experiment tracking",
    "persona": "You are new to MLflow and need step-by-step explanations",
}

# Frustrated user testing agent resilience
{
    "goal": "Fix a deployment blocking production",
    "persona": "You are impatient because this is blocking a release",
}
```
Context
The context field passes additional arguments to your predict function. This is useful for:
- Passing user identifiers for personalization
- Providing session state or configuration
- Including metadata your agent needs
```python
{
    "goal": "Get personalized model recommendations",
    "context": {
        "user_id": "enterprise_user_42",  # user_id is passed to predict_fn via kwargs
        "subscription_tier": "premium",
        "preferred_framework": "pytorch",
    },
}
```
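To see how context fields arrive in your agent, here is a minimal stand-in for the simulator's dispatch step. It is illustrative only: `dispatch` and `echo_agent` are hypothetical names, but the behavior matches what this page documents (context fields plus `mlflow_session_id` are forwarded as kwargs):

```python
def dispatch(predict_fn, history, test_case, session_id):
    # The simulator forwards the test case's context fields, plus the
    # session id, as keyword arguments to the agent function.
    kwargs = dict(test_case.get("context", {}))
    kwargs["mlflow_session_id"] = session_id
    return predict_fn(history, **kwargs)

def echo_agent(input, **kwargs):
    # A stub agent that just reports what it received.
    return f"user={kwargs.get('user_id')} session={kwargs['mlflow_session_id']}"

reply = dispatch(
    echo_agent,
    [{"role": "user", "content": "hi"}],
    {"goal": "Get personalized model recommendations",
     "context": {"user_id": "enterprise_user_42"}},
    "sim-1",
)
# reply == "user=enterprise_user_42 session=sim-1"
```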
Providing test cases
The simplest way to define test cases is a list of dictionaries or a DataFrame:
```python
test_cases = [
    {"goal": "Successfully configure experiment tracking"},
    {"goal": "Debug a deployment error", "persona": "Senior engineer"},
    {"goal": "Set up a CI/CD pipeline for ML", "context": {"team": "platform"}},
]

simulator = ConversationSimulator(test_cases=test_cases)
```
You can also use a DataFrame:
```python
import pandas as pd

df = pd.DataFrame(
    [
        {"goal": "Successfully configure experiment tracking"},
        {"goal": "Debug a deployment error", "persona": "Senior engineer"},
        {"goal": "Set up a CI/CD pipeline for ML"},
    ]
)

simulator = ConversationSimulator(test_cases=df)
```
Generating test cases from existing conversations
Use generate_test_cases to generate test cases from existing sessions. This is useful for creating test cases that reflect real user behavior in production conversations:
```python
import mlflow
from mlflow.genai.simulators import generate_test_cases, ConversationSimulator

# Get existing sessions from your experiment
sessions = mlflow.search_sessions(
    locations=["<experiment-id>"],
    max_results=50,
)

# Generate test cases by extracting goals and personas from sessions
test_cases = generate_test_cases(sessions)

# Optionally, save generated test cases as a dataset for reproducibility
from mlflow.genai.datasets import create_dataset

dataset = create_dataset(name="generated_scenarios")
dataset.merge_records([{"inputs": tc} for tc in test_cases])

# Use generated test cases with the simulator
simulator = ConversationSimulator(test_cases=test_cases)
```
Tracking test cases as an MLflow dataset
For reproducible testing, persist your test cases as an MLflow evaluation dataset:
```python
from mlflow.genai.datasets import create_dataset, get_dataset

# Create and populate a dataset
dataset = create_dataset(name="conversation_test_cases")
dataset.merge_records(
    [
        {"inputs": {"goal": "Successfully configure experiment tracking"}},
        {"inputs": {"goal": "Debug a deployment error", "persona": "Senior engineer"}},
    ]
)

# Use the dataset with the simulator
dataset = get_dataset(name="conversation_test_cases")
simulator = ConversationSimulator(test_cases=dataset)
```
Agent function interface
Your agent function receives the conversation history and returns a response. Two parameter names are supported:
- input: the conversation history as a list of message dicts (chat completion response format)
- messages: an equivalent alternative parameter name (chat completion request format)
```python
def predict_fn(input: list[dict], **kwargs) -> str:
    """
    Args:
        input: Conversation history as a list of message dicts.
            Each message has "role" ("user" or "assistant") and "content".
            Alternatively, use "messages" as the parameter name.
        **kwargs: Additional arguments including:
            - mlflow_session_id: Unique ID for this conversation session
            - Any fields from your test case's "context"

    Returns:
        The assistant's response as a string.
    """
```
Basic example
```python
from openai import OpenAI

client = OpenAI()

def predict_fn(input: list[dict], **kwargs):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=input,
    )
    return response.choices[0].message.content
```
Using context
```python
from openai import OpenAI

client = OpenAI()

def predict_fn(input: list[dict], **kwargs):
    # user_id is passed from the test case's "context" field
    user_id = kwargs.get("user_id")

    # Customize the system prompt based on context
    system_message = f"You are helping user {user_id}. Be helpful and concise."
    messages = [{"role": "system", "content": system_message}] + input

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    return response.choices[0].message.content
```
Stateful agents
For agents that maintain state across turns, use mlflow_session_id to manage per-session state:
```python
from openai import OpenAI

client = OpenAI()

# Simple in-memory state management for stateful agents
conversation_state = {}  # Maps session_id -> conversation context

def predict_fn(input: list[dict], **kwargs):
    session_id = kwargs.get("mlflow_session_id")

    # Initialize or retrieve state for this session
    if session_id not in conversation_state:
        conversation_state[session_id] = {
            "turn_count": 0,
            "topics_discussed": [],
        }

    state = conversation_state[session_id]
    state["turn_count"] += 1

    # Your agent logic here - can use state for context
    system_message = f"You are a helpful assistant. This is turn {state['turn_count']} of the conversation."
    messages = [{"role": "system", "content": system_message}] + input

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    return response.choices[0].message.content
```
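The per-session bookkeeping above can be exercised without an LLM by factoring it into a helper (hypothetical, for illustration only):

```python
conversation_state = {}  # Maps session_id -> conversation context

def get_state(session_id: str) -> dict:
    """Initialize or retrieve per-session state, and count this turn."""
    state = conversation_state.setdefault(
        session_id, {"turn_count": 0, "topics_discussed": []}
    )
    state["turn_count"] += 1
    return state

# Two turns in one session, one turn in another: state is kept per session,
# keyed by the mlflow_session_id the simulator passes in.
assert get_state("sim-a")["turn_count"] == 1
assert get_state("sim-a")["turn_count"] == 2
assert get_state("sim-b")["turn_count"] == 1
```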
Configuration options
ConversationSimulator parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `test_cases` | `list[dict]`, `DataFrame`, or `EvaluationDataset` | Required | Test case definitions |
| `max_turns` | `int` | `10` | Maximum number of conversation turns before stopping |
| `user_model` | `str` | (Databricks-hosted) | Model used to simulate user messages |
| `**user_llm_params` | `dict` | `{}` | Additional parameters for the user simulation LLM |
Model selection
The simulator uses an LLM to generate realistic user messages. You can specify a different model with the user_model parameter:
```python
simulator = ConversationSimulator(
    test_cases=test_cases,
    user_model="anthropic:/claude-sonnet-4-20250514",
    temperature=0.7,  # Passed to the user simulation LLM
)
```
Supported model formats follow the pattern "<provider>:/<model>". For the full list of supported providers, see the MLflow documentation.
Information about the models powering chat simulation
LLM-based conversation simulation may use third-party services to simulate user interactions, including Azure OpenAI operated by Microsoft.
For Azure OpenAI, Databricks has opted out of Abuse Monitoring, so no prompts or responses are stored with Azure OpenAI.
For European Union (EU) workspaces, conversation simulation uses models hosted in the EU. All other regions use models hosted in the US.
Disabling partner-powered AI features prevents conversation simulation from calling partner-powered models. You can still use conversation simulation by supplying your own model.
Conversation stopping
A conversation stops when either of the following conditions is met:
- Maximum turns reached: the max_turns limit has been hit
- Goal achieved: the simulator detects that the user's goal has been completed
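The stopping logic can be sketched as a simple loop. This is illustrative only: `goal_achieved` is a placeholder for the simulator's LLM-based goal detection, not a real MLflow API:

```python
def run_conversation(max_turns, goal_achieved):
    """Return the number of turns taken before the conversation stopped."""
    history = []
    for turn in range(1, max_turns + 1):
        history.append({"role": "user", "content": f"message {turn}"})
        history.append({"role": "assistant", "content": f"reply {turn}"})
        if goal_achieved(history):
            return turn  # goal reached before the turn limit
    return max_turns  # turn limit reached

# Goal detected on the third user turn (6 messages in the history):
assert run_conversation(10, lambda h: len(h) >= 6) == 3
# Goal never detected, so the max_turns limit applies:
assert run_conversation(5, lambda h: False) == 5
```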
Viewing results
Simulated conversations appear in the MLflow UI with special metadata:
- Session ID: each conversation has a unique session ID (prefixed with sim-)
- Simulation metadata: the goal, persona, and turn number are stored on each trace
Navigate to the Sessions tab in your experiment to view conversations grouped by session. Select a session to see its individual turns and their evaluations.