Migrate from Agent Evaluation to MLflow 3

Agent Evaluation is now integrated with MLflow 3 on Databricks. The Agent Evaluation SDK methods are now exposed under the mlflow.genai namespace of the mlflow[databricks]>=3.1 SDK. MLflow 3 introduces:

  • An updated UI that reflects all SDK functionality
  • A new SDK, mlflow.genai, with simplified APIs for running evaluations, collecting human labels, and managing evaluation datasets
  • Enhanced tracing, backed by a production-grade ingestion backend for real-time observability
  • Simplified collection of human feedback
  • Improved LLM judges, available as built-in scorers

This guide helps you migrate from Agent Evaluation (MLflow 2.x and databricks-agents<1.0) to MLflow 3. This detailed guide is also available in a quick-reference format.

Important

MLflow 3 with Agent Evaluation is only available with managed MLflow, not open source MLflow. See the managed versus open source MLflow page for a deeper look at the differences between managed and open source MLflow.

Migration checklist

Use this checklist to get started. Each item links to details in the sections below.

Evaluation APIs

LLM judges

Human feedback

Common pitfalls to avoid

  • Remember to update the data field names in your DataFrames
  • Remember that model_type="databricks-agent" is no longer needed
  • Make sure custom scorers return valid values ("yes"/"no" for pass/fail)
  • Use search_traces() instead of accessing the results table directly
  • Update any hard-coded namespace references in your code
  • Remember to specify all scorers explicitly - MLflow 3 does not run judges automatically
  • global_guidelines moves from a configuration entry to an explicit Guidelines() scorer (see the sketch below this list)
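
That last pitfall is the one that most often surprises people. A minimal before/after sketch, reusing the clarity guideline from the basic evaluation example later in this guide:

# MLflow 2.x: guideline passed through evaluator_config
# evaluator_config={"databricks-agent": {"global_guidelines": {"clarity": ["Response must be clear and concise"]}}}

# MLflow 3.x: the same guideline becomes an explicit scorer passed to mlflow.genai.evaluate()
from mlflow.genai.scorers import Guidelines

clarity_scorer = Guidelines(name="clarity", guidelines="Response must be clear and concise")
# results = mlflow.genai.evaluate(data=eval_data, predict_fn=my_agent, scorers=[clarity_scorer, ...])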

Evaluation API migration

Import updates

The list below summarizes the imports to update; details and examples follow in each subsection.

# Old imports
from mlflow import evaluate
from databricks.agents.evals import metric
from databricks.agents.evals import judges

# New imports
from mlflow.genai import evaluate
from mlflow.genai.scorers import scorer
from mlflow.genai import judges
# For predefined scorers:
from mlflow.genai.scorers import (
    Correctness, Guidelines, ExpectationsGuidelines,
    RelevanceToQuery, Safety, RetrievalGroundedness,
    RetrievalRelevance, RetrievalSufficiency
)

mlflow.evaluate() → mlflow.genai.evaluate()

The core evaluation API has moved to a dedicated GenAI namespace with more concise parameter names.

Key API changes:

MLflow 2.x → MLflow 3.x (notes):

  • mlflow.evaluate() → mlflow.genai.evaluate() (new namespace)
  • model parameter → predict_fn parameter (more descriptive name)
  • model_type="databricks-agent" → not needed (detected automatically)
  • extra_metrics=[...] → scorers=[...] (clearer terminology)
  • evaluator_config={...} → not needed (handled by individual scorers)

Data field mapping:

MLflow 2.x field → MLflow 3.x field (description):

  • request → inputs (agent input)
  • response → outputs (agent output)
  • expected_response → expectations (ground truth)
  • retrieved_context → accessed through the trace (context from the trace)
  • guidelines → part of the scorer configuration (moved to the scorer level)

Example: basic evaluation

MLflow 2.x:

import mlflow
import pandas as pd

eval_data = [
        {
            "request":  "What is MLflow?",
            "response": "MLflow is an open-source platform for managing ML lifecycle.",
            "expected_response": "MLflow is an open-source platform for managing ML lifecycle.",
        },
        {
            "request":  "What is Databricks?",
            "response": "Databricks is a unified analytics platform.",
            "expected_response": "Databricks is a unified analytics platform for big data and AI.",
        },
    ]

# Note: By default, MLflow 2.x runs all applicable judges automatically
results = mlflow.evaluate(
    data=eval_data,
    model=my_agent,
    model_type="databricks-agent",
    evaluator_config={
        "databricks-agent": {
            # Optional: limit to specific judges
            # "metrics": ["correctness", "safety"],
            # Optional: add global guidelines
            "global_guidelines": {
                "clarity": ["Response must be clear and concise"]
            }
        }
    }
)

# Access results
eval_df = results.tables['eval_results']

MLflow 3.x:

import mlflow
import pandas as pd
from mlflow.genai.scorers import Correctness, Guidelines, RelevanceToQuery

eval_data = [
        {
            "inputs": {"request": "What is MLflow?"},
            "outputs": {
                "response": "MLflow is an open-source platform for managing ML lifecycle."
            },
            "expectations": {
                "expected_response":
                    "MLflow is an open-source platform for managing ML lifecycle.",

            },
        },
        {
            "inputs": {"request": "What is Databricks?"},
            "outputs": {"response": "Databricks is a unified analytics platform."},
            "expectations": {
                "expected_response":
                    "Databricks is a unified analytics platform for big data and AI.",

            },
        },
    ]

# Define guidelines for scorer
guidelines = {
    "clarity": ["Response must be clear and concise"],
    # supports str or list[str]
    "accuracy": "Response must be factually accurate",
}

print("Running evaluation with mlflow.genai.evaluate()...")

with mlflow.start_run(run_name="basic_evaluation_test") as run:
    # Run evaluation with new API
    # Note: Must explicitly specify which scorers to run (no automatic selection)
    results = mlflow.genai.evaluate(
        data=eval_data,
        scorers=[
            Correctness(),  # Requires expectations.expected_response
            RelevanceToQuery(),  # No ground truth needed
            Guidelines(name="clarity", guidelines=guidelines["clarity"]),
            Guidelines(name="accuracy", guidelines=guidelines["accuracy"]),
            # ExpectationsGuidelines(),
            # Add more scorers as needed: Safety(), RetrievalGroundedness(), etc.
        ],
    )

# Access results using search_traces
traces = mlflow.search_traces(
        run_id=results.run_id,
)

Accessing evaluation results

In MLflow 3, evaluation results are stored as traces with attached assessments. Use mlflow.search_traces() to access the detailed results.

# Access results using search_traces
traces = mlflow.search_traces(
    run_id=results.run_id,
)

# Access assessments for each trace
for trace in traces:
    assessments = trace.info.assessments
    for assessment in assessments:
        print(f"Scorer: {assessment.name}")
        print(f"Value: {assessment.value}")
        print(f"Rationale: {assessment.rationale}")

Evaluating an MLflow LoggedModel

In MLflow 2.x, you could pass a logged MLflow model (for example, a PyFunc model or a model logged by an agent framework) directly to mlflow.evaluate(). In MLflow 3.x, you need to wrap the model in a predict function that handles the parameter mapping.

This wrapper is needed because mlflow.genai.evaluate() expects a predict function that accepts the keys of each row's inputs dictionary as keyword arguments, whereas most logged models accept a single input argument (for example, model_inputs for PyFunc models, or a similar interface for LangChain models).

The predict function acts as a translation layer between the evaluation framework's named arguments and the input format your model expects.

import mlflow
from mlflow.genai.scorers import Safety

# Make sure to load your logged model outside of the predict_fn so MLflow only loads it once!
model = mlflow.pyfunc.load_model("models:/chatbot/staging")

def evaluate_model(question: str) -> dict:
    return model.predict({"question": question})

results = mlflow.genai.evaluate(
    data=[{"inputs": {"question": "Tell me about MLflow"}}],
    predict_fn=evaluate_model,
    scorers=[Safety()]
)

Migrating custom metrics to scorers

Custom evaluation functions (@metric) now use the @scorer decorator with a simplified signature.

Key changes:

MLflow 2.x → MLflow 3.x (notes):

  • @metric decorator → @scorer decorator (new name)
  • def my_metric(request, response, ...) → def my_scorer(inputs, outputs, expectations, trace) (simplified)
  • multiple expected_* parameters → a single expectations dictionary (consolidated)
  • custom_expected dictionary → part of expectations (simplified)
  • request parameter → inputs parameter (consistent naming)
  • response parameter → outputs parameter (consistent naming)

Example: pass/fail scorer

MLflow 2.x:

from databricks.agents.evals import metric

@metric
def response_length_check(request, response, expected_response=None):
    """Check if response is within acceptable length."""
    length = len(response)
    return "yes" if 50 <= length <= 500 else "no"

# Use in evaluation
results = mlflow.evaluate(
    data=eval_data,
    model=my_agent,
    model_type="databricks-agent",
    extra_metrics=[response_length_check]
)

MLflow 3.x:

import mlflow
from mlflow.genai.scorers import scorer

# Sample agent function
@mlflow.trace
def my_agent(request: str):
    """Simple mock agent for testing - MLflow 3 expects dict input"""
    responses = {
        "What is MLflow?": "MLflow is an open-source platform for managing ML lifecycle.",
        "What is Databricks?": "Databricks is a unified analytics platform.",
    }
    return {"response": responses.get(request, "I don't have information about that.")}

@scorer
def response_length_check(inputs, outputs, expectations=None, trace=None):
    """Check if response is within acceptable length."""
    length = len(outputs.get("response", ""))  # outputs is the dict returned by the agent
    return "yes" if 50 <= length <= 500 else "no"

# Use in evaluation
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[response_length_check]
)

Example: numeric scorer with an assessment

MLflow 2.x:

from databricks.agents.evals import metric, Assessment

def calculate_similarity(response, expected_response):
    return 1

@metric
def semantic_similarity(response, expected_response):
    """Calculate semantic similarity score."""
    # Your similarity logic here
    score = calculate_similarity(response, expected_response)

    return Assessment(
        name="semantic_similarity",
        value=score,
        rationale=f"Similarity score based on embedding distance: {score:.2f}"
    )

MLflow 3.x:

from mlflow.genai.scorers import scorer
from mlflow.entities import Feedback

@scorer
def semantic_similarity(outputs, expectations):
    """Calculate semantic similarity score."""
    # Your similarity logic here
    expected = expectations.get("expected_response", "")
    score = calculate_similarity(outputs, expected)

    return Feedback(
        name="semantic_similarity",
        value=score,
        rationale=f"Similarity score based on embedding distance: {score:.2f}"
    )

LLM judge migration

Key differences in judge behavior

Automatic judge selection:

MLflow 2.x → MLflow 3.x:

  • Automatically runs all applicable judges based on the data → You must explicitly specify which scorers to use
  • Uses evaluator_config to limit which judges run → Pass the desired scorers in the scorers parameter
  • global_guidelines in the configuration → Use the Guidelines() scorer
  • Judges chosen based on the available data fields → You have precise control over which scorers run

MLflow 2.x automatic judge selection:

  • Without ground truth: runs chunk_relevance, groundedness, relevance_to_query, safety, and guideline_adherence
  • With ground truth: also runs context_sufficiency and correctness

MLflow 3.x explicit scorer selection:

  • You must explicitly list the scorers you want to run
  • More control, but you need to be explicit about your evaluation needs

Migration paths

Use case, MLflow 2.x approach → recommended MLflow 3.x approach:

  • Basic correctness check: @metric with judges.correctness() → Correctness() scorer or the judges.is_correct() judge
  • Safety evaluation: @metric with judges.safety() → Safety() scorer or the judges.is_safe() judge
  • Global guidelines: judges.guideline_adherence() → Guidelines() scorer or the judges.meets_guidelines() judge
  • Per-eval-set-row guidelines: judges.guideline_adherence() with expected_* → ExpectationsGuidelines() scorer or the judges.meets_guidelines() judge
  • Check whether facts are supported: judges.groundedness() → judges.is_grounded() or the RetrievalGroundedness() scorer
  • Check context relevance: judges.relevance_to_query() → judges.is_context_relevant() or the RelevanceToQuery() scorer
  • Check relevance of context chunks: judges.chunk_relevance() → judges.is_context_relevant() or the RetrievalRelevance() scorer
  • Check context sufficiency: judges.context_sufficiency() → judges.is_context_sufficient() or the RetrievalSufficiency() scorer
  • Complex custom logic: direct judge calls inside @metric → a predefined scorer, or @scorer with direct judge calls

MLflow 3 offers two ways to use LLM judges:

  1. Predefined scorers - ready-to-use scorers that wrap the judges and handle trace parsing automatically

  2. Direct judge calls - call the judges directly inside custom scorers for more control

Controlling which judges run

Example: specifying which judges to run

MLflow 2.x (limiting the default judges):

import mlflow

# By default, runs all applicable judges
# Use evaluator_config to limit which judges run
results = mlflow.evaluate(
    data=eval_data,
    model=my_agent,
    model_type="databricks-agent",
    evaluator_config={
        "databricks-agent": {
            # Only run these specific judges
            "metrics": ["groundedness", "relevance_to_query", "safety"]
        }
    }
)

MLflow 3.x (explicit scorer selection):

from mlflow.genai.scorers import (
    RetrievalGroundedness,
    RelevanceToQuery,
    Safety
)

# Must explicitly specify which scorers to run
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[
        RetrievalGroundedness(),
        RelevanceToQuery(),
        Safety()
    ]
)

Comprehensive migration example

This example shows how to migrate an evaluation that uses multiple judges and custom configuration:

MLflow 2.x:

from databricks.agents.evals import judges, metric
import mlflow

# Custom metric using judge
@metric
def check_no_pii(request, response, retrieved_context):
    """Check if retrieved context contains PII."""
    context_text = '\n'.join([c['content'] for c in retrieved_context])

    return judges.guideline_adherence(
        request=request,
        guidelines=["The context must not contain personally identifiable information."],
        guidelines_context={"retrieved_context": context_text}
    )

# Define global guidelines
global_guidelines = {
    "tone": ["Response must be professional and courteous"],
    "format": ["Response must use bullet points for lists"]
}

# Run evaluation with multiple judges
results = mlflow.evaluate(
    data=eval_data,
    model=my_agent,
    model_type="databricks-agent",
    evaluator_config={
        "databricks-agent": {
            # Specify subset of built-in judges
            "metrics": ["correctness", "groundedness", "safety"],
            # Add global guidelines
            "global_guidelines": global_guidelines
        }
    },
    # Add custom judge
    extra_metrics=[check_no_pii]
)

MLflow 3.x:

from mlflow.genai.scorers import (
    Correctness,
    RetrievalGroundedness,
    Safety,
    Guidelines,
    scorer
)
from mlflow.genai import judges
import mlflow

# Custom scorer using judge
@scorer
def check_no_pii(inputs, outputs, trace):
    """Check if retrieved context contains PII."""
    # Extract retrieved context from the trace
    retrieved_context = trace.data.spans[0].attributes.get("retrieved_context", [])
    context_text = '\n'.join([c['content'] for c in retrieved_context])

    return judges.meets_guidelines(
        name="no_pii",
        context={
            "request": inputs,
            "retrieved_context": context_text
        },
        guidelines=["The context must not contain personally identifiable information."]
    )

# Run evaluation with explicit scorers
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[
        # Built-in scorers (explicitly specified)
        Correctness(),
        RetrievalGroundedness(),
        Safety(),
        # Global guidelines as scorers
        Guidelines(name="tone", guidelines="Response must be professional and courteous"),
        Guidelines(name="format", guidelines="Response must use bullet points for lists"),
        # Custom scorer
        check_no_pii
    ]
)

Migrating to the predefined judge scorers

MLflow 3 provides predefined scorers that wrap the LLM judges, making them easier to use with mlflow.genai.evaluate().

Example: correctness judge

MLflow 2.x:

from databricks.agents.evals import judges, metric

@metric
def check_correctness(request, response, expected_response):
    """Check if response is correct."""
    return judges.correctness(
        request=request,
        response=response,
        expected_response=expected_response
    )

# Use in evaluation
results = mlflow.evaluate(
    data=eval_data,
    model=my_agent,
    model_type="databricks-agent",
    extra_metrics=[check_correctness]
)

MLflow 3.x (option 1: use the predefined scorer):

from mlflow.genai.scorers import Correctness

# Use predefined scorer directly
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[Correctness()]
)

MLflow 3.x (option 2: custom scorer using the judge):

from mlflow.genai.scorers import scorer
from mlflow.genai import judges

@scorer
def check_correctness(inputs, outputs, expectations):
    """Check if response is correct."""
    return judges.correctness(
        request=inputs,
        response=outputs,
        expected_response=expectations.get("expected_response", "")
    )

# Use in evaluation
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[check_correctness]
)

Example: safety judge

MLflow 2.x:

from databricks.agents.evals import judges, metric

@metric
def check_safety(request, response):
    """Check if response is safe."""
    return judges.safety(
        request=request,
        response=response
    )

MLflow 3.x:

from mlflow.genai.scorers import Safety

# Use predefined scorer
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[Safety()]
)

Example: relevance judge

MLflow 2.x:

from databricks.agents.evals import judges, metric

@metric
def check_relevance(request, response):
    """Check if response is relevant to query."""
    return judges.relevance_to_query(
        request=request,
        response=response
    )

MLflow 3.x:

from mlflow.genai.scorers import RelevanceToQuery

# Use predefined scorer
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[RelevanceToQuery()]
)

Example: groundedness judge

MLflow 2.x:

from databricks.agents.evals import judges, metric

@metric
def check_groundedness(response, retrieved_context):
    """Check if response is grounded in context."""
    context_text = '\n'.join([c['content'] for c in retrieved_context])
    return judges.groundedness(
        response=response,
        context=context_text
    )

MLflow 3.x:

from mlflow.genai.scorers import RetrievalGroundedness

# Use predefined scorer (automatically extracts context from trace)
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[RetrievalGroundedness()]
)

Migrating guideline_adherence to meets_guidelines

The guideline_adherence judge has been renamed to meets_guidelines and has a more streamlined API.

MLflow 2.x:

from databricks.agents.evals import judges, metric

@metric
def check_tone(request, response):
    """Check if response follows tone guidelines."""
    return judges.guideline_adherence(
        request=request,
        response=response,
        guidelines=["The response must be professional and courteous."]
    )

@metric
def check_policies(request, response, retrieved_context):
    """Check if response follows company policies."""
    context_text = '\n'.join([c['content'] for c in retrieved_context])

    return judges.guideline_adherence(
        request=request,
        guidelines=["Response must comply with return policy in context."],
        guidelines_context={
            "response": response,
            "retrieved_context": context_text
        }
    )

MLflow 3.x (option 1: use the predefined Guidelines scorer):

from mlflow.genai.scorers import Guidelines

# For simple guidelines that only need request/response
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[
        Guidelines(
            name="tone",
            guidelines="The response must be professional and courteous."
        )
    ]
)

MLflow 3.x (option 2: custom scorer with meets_guidelines):

from mlflow.genai.scorers import scorer
from mlflow.genai import judges

@scorer
def check_policies(inputs, outputs, trace):
    """Check if response follows company policies."""
    # Extract retrieved context from the trace
    retrieved_context = trace.data.spans[0].attributes.get("retrieved_context", [])
    context_text = '\n'.join([c['content'] for c in retrieved_context])

    return judges.meets_guidelines(
        name="policy_compliance",
        guidelines="Response must comply with return policy in context.",
        context={
            "request": inputs,
            "response": outputs,
            "retrieved_context": context_text
        }
    )

Example: migrating to ExpectationsGuidelines

If you have per-example guidelines in your evaluation set (for example, certain topics must be covered, or the response must follow a specific style), use the ExpectationsGuidelines scorer in MLflow 3.x.

MLflow 2.x:

In MLflow 2.x, guidelines like these were implemented as follows:

import pandas as pd

eval_data = {
    "request": "What is MLflow?",
    "response": "MLflow is an open-source platform for managing ML lifecycle.",
    "guidelines": [
        ["The response must mention these topics: platform, observability, testing"]
    ],
}

eval_df = pd.DataFrame(eval_data)

mlflow.evaluate(
    data=eval_df,
    model_type="databricks-agent",
    evaluator_config={
        "databricks-agent": {"metrics": ["guideline_adherence"]}
    }
)

MLflow 3.x:

In MLflow 3.x, you organize the evaluation data differently. Each entry in the evaluation data should have an expectations key, which can contain a guidelines field.

Here is what your evaluation data might look like:

eval_data = [
    {
        "inputs": {"input": "What is MLflow?"},
        "outputs": {"response": "MLflow is an open-source platform for managing ML lifecycle."},
        "expectations": {
            "guidelines": [
                "The response should mention the topics: platform, observability, and testing."
            ]
        }
    }
]

Then use the ExpectationsGuidelines scorer:

import mlflow
from mlflow.genai.scorers import ExpectationsGuidelines

expectations_guideline = ExpectationsGuidelines()

# Use predefined scorer
results = mlflow.genai.evaluate(
    data=eval_data,  # Make sure each row has expectations.guidelines
    predict_fn=my_app,
    scorers=[
        expectations_guideline
    ]
)

Tip

If you need to check for specific factual content (for example, "MLflow is open source"), use the Correctness scorer with the expected_facts field instead of guidelines. See Correctness judge and scorer.
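
A minimal sketch of that pattern, assuming the my_agent function defined earlier in this guide and a hypothetical evaluation row that supplies expectations.expected_facts:

import mlflow
from mlflow.genai.scorers import Correctness

# Hypothetical evaluation row: expected_facts lists the facts the response must contain
fact_eval_data = [
    {
        "inputs": {"request": "What is MLflow?"},
        "expectations": {
            "expected_facts": [
                "MLflow is open source",
                "MLflow manages the ML lifecycle",
            ]
        },
    }
]

results = mlflow.genai.evaluate(
    data=fact_eval_data,
    predict_fn=my_agent,      # agent defined earlier in this guide
    scorers=[Correctness()],  # grades the output against expectations.expected_facts
)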

Replicating MLflow 2.x automatic judge behavior

To replicate the MLflow 2.x behavior of running all applicable judges, explicitly include every scorer:

MLflow 2.x (automatic):

# Automatically runs all applicable judges based on data
results = mlflow.evaluate(
    data=eval_data,  # Contains expected_response and retrieved_context
    model=my_agent,
    model_type="databricks-agent"
)

MLflow 3.x (explicit):

from mlflow.genai.scorers import (
    Correctness, RetrievalSufficiency,  # Require ground truth
    RelevanceToQuery, Safety, RetrievalGroundedness, RetrievalRelevance  # No ground truth
)

# Manually specify all judges you want to run
results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=my_agent,
    scorers=[
        # With ground truth judges
        Correctness(),
        RetrievalSufficiency(),
        # Without ground truth judges
        RelevanceToQuery(),
        Safety(),
        RetrievalGroundedness(),
        RetrievalRelevance(),
    ]
)

Direct judge usage

You can still call the judges directly for testing:

from mlflow.genai import judges

# Test a judge directly (same in both versions)
result = judges.correctness(
    request="What is MLflow?",
    response="MLflow is an open-source platform for ML lifecycle.",
    expected_response="MLflow is an open-source platform for managing the ML lifecycle."
)
print(f"Judge result: {result.value}")
print(f"Rationale: {result.rationale}")

Human feedback migration

Labeling sessions and schemas

The Review App functionality has moved from databricks.agents to mlflow.genai.labeling.

Namespace changes:

MLflow 2.x → MLflow 3.x:

  • databricks.agents.review_app → mlflow.genai.labeling
  • databricks.agents.datasets → mlflow.genai.datasets
  • review_app.label_schemas.* → mlflow.genai.label_schemas.*
  • app.create_labeling_session() → labeling.create_labeling_session()

Example: creating a labeling session

MLflow 2.x:

from databricks.agents import review_app
import mlflow

# Get review app

my_app = review_app.get_review_app()

# Create custom label schema
quality_schema = my_app.create_label_schema(
    name="response_quality",
    type="feedback",
    title="Rate the response quality",
    input=review_app.label_schemas.InputCategorical(
        options=["Poor", "Fair", "Good", "Excellent"]
    )
)

# Create labeling session
session = my_app.create_labeling_session(
    name="quality_review_jan_2024",
    agent="my_agent",
    assigned_users=["user1@company.com", "user2@company.com"],
    label_schemas=[
        review_app.label_schemas.EXPECTED_FACTS,
        "response_quality"
    ]
)

# Add traces for labeling
traces = mlflow.search_traces(run_id=run_id)
session.add_traces(traces)

MLflow 3.x:

import mlflow
import mlflow.genai.labeling as labeling
import mlflow.genai.label_schemas as schemas

# Create custom label schema
quality_schema = schemas.create_label_schema(
    name="response_quality",
    type=schemas.LabelSchemaType.FEEDBACK,
    title="Rate the response quality",
    input=schemas.InputCategorical(
        options=["Poor", "Fair", "Good", "Excellent"]
    ),
    overwrite=True
)

# Previously built-in schemas must be created before use
# However, constants for their names are provided to ensure your schemas work with built-in scorers
expected_facts_schema = schemas.create_label_schema(
    name=schemas.EXPECTED_FACTS,
    type=schemas.LabelSchemaType.EXPECTATION,
    title="Expected facts",
    input=schemas.InputTextList(max_length_each=1000),
    instruction="Please provide a list of facts that you expect to see in a correct response.",
    overwrite=True
)

# Create labeling session
session = labeling.create_labeling_session(
    name="quality_review_jan_2024",
    assigned_users=["user1@company.com", "user2@company.com"],
    label_schemas=[
        schemas.EXPECTED_FACTS,
        "response_quality"
    ]
)

# Add traces for labeling
traces = mlflow.search_traces(
    run_id=session.mlflow_run_id
)
session.add_traces(traces)

# Get review app URL
app = labeling.get_review_app()
print(f"Review app URL: {app.url}")

Syncing feedback to datasets

MLflow 2.x:

# Sync expectations back to dataset
session.sync(to_dataset="catalog.schema.eval_dataset")

# Use dataset for evaluation
dataset = spark.read.table("catalog.schema.eval_dataset")
results = mlflow.evaluate(
    data=dataset,
    model=my_agent,
    model_type="databricks-agent"
)

MLflow 3.x:

from mlflow.genai import datasets
import mlflow

# Sample agent function
@mlflow.trace
def my_agent(request: str):
    """Simple mock agent for testing - MLflow 3 expects dict input"""
    responses = {
        "What is MLflow?": "MLflow is an open-source platform for managing ML lifecycle.",
        "What is Databricks?": "Databricks is a unified analytics platform.",
    }
    return {"response": responses.get(request, "I don't have information about that.")}

# Sync expectations back to dataset
session.sync(to_dataset="catalog.schema.eval_dataset")

# Use dataset for evaluation
dataset = datasets.get_dataset("catalog.schema.eval_dataset")
results = mlflow.genai.evaluate(
    data=dataset,
    predict_fn=my_agent
)

Additional resources

For additional support during migration, see the MLflow documentation or contact the Databricks support team.