Test your LUIS app in the LUIS portal

Testing an app is an iterative process. After training your LUIS app, test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, make updates to the LUIS app, then train and test again.

Train before testing

  1. Sign in to the LUIS portal, and select your Subscription and Authoring resource to see the apps assigned to that authoring resource.
  2. Open your app by selecting its name on the My Apps page.
  3. To test against the most recent version of the active app, select Train from the top menu before testing.

Test an utterance

The test utterance should not be exactly the same as any example utterance in the app. It should include the word choice, phrase length, and entity usage you expect from a user.

  1. Sign in to the LUIS portal, and select your Subscription and Authoring resource to see the apps assigned to that authoring resource.

  2. Open your app by selecting its name on the My Apps page.

  3. To access the Test slide-out panel, select Test in your application's top panel.

    Train & Test App page

  4. Enter an utterance in the text box and press Enter. You can type as many test utterances as you want in the Test panel, but only one utterance at a time.

  5. The utterance, its top intent, and score are added to the list of utterances under the text box.

    Interactive testing identifies the wrong intent

Inspect the prediction

You can inspect the details of a test result in the Inspect panel.

  1. With the Test slide-out panel open, select Inspect for an utterance you want to compare.

    Select the Inspect button to see more details about the test results

  2. The Inspection panel appears, showing the prediction for the selected utterance, including the top-scoring intent and any identified entities.

    Partial screenshot of the Test Inspect panel

Add to example utterances

From the inspection panel, you can add the test utterance to an intent by selecting Add to example utterances.

Disable required features

This toggle helps you determine whether the trained app is correctly predicting your entities based on required features. The default setting is to apply the feature as required during prediction. Select this toggle to see what the prediction would be if the subentity's feature were not required.

When to disable required features

The trained app may mispredict a machine-learned entity for one of the following reasons:

  • Incorrect labeling of example utterances.
  • The required feature doesn't match the text.

An example is a machine-learned entity with a subentity of a person's name.

Screenshot of the LUIS portal machine-learned entity schema with a required feature

An example utterance for this machine-learned entity is: Assign Bob Jones to work on the new security feature.

The extraction should be security feature as the ticket description and Bob Jones as the engineer, two subentities of the Assign ticket entity.

To help the subentity predict successfully, add the prebuilt entity PersonName as a feature to the engineer subentity. If you make the feature required, the subentity will only be extracted when the PersonName prebuilt entity is predicted for the text. This means that any name in the text that isn't predicted as the PersonName prebuilt entity will not be returned as the labeled subentity, engineer.

When you use the interactive test pane and see that a subentity with a required feature isn't predicted, toggle this setting to see whether the subentity would be predicted without the feature being required. If the example utterances are labeled correctly, the subentity may be predicted correctly even without the required feature.
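The effect of a required feature can be illustrated with a small sketch. This is a conceptual model of the filtering behavior described above, not LUIS internals, and the function and variable names are hypothetical: a candidate span is returned as the engineer subentity only when the PersonName prebuilt entity also matches it.

```python
def extract_engineer(candidates, personname_matches, feature_required):
    """Conceptual sketch of required-feature filtering (not LUIS internals).

    candidates: spans the model would label as the 'engineer' subentity.
    personname_matches: spans matched by the PersonName prebuilt entity.
    """
    if not feature_required:
        # Without the requirement, every candidate span is returned.
        return candidates
    # With the requirement, only candidates also matched by PersonName survive.
    return [span for span in candidates if span in personname_matches]

candidates = ["Bob Jones", "Security Team"]
personname = ["Bob Jones"]

print(extract_engineer(candidates, personname, feature_required=True))
# → ['Bob Jones']
print(extract_engineer(candidates, personname, feature_required=False))
# → ['Bob Jones', 'Security Team']
```

Toggling the setting in the test pane corresponds to flipping `feature_required` here: if a span appears only when the requirement is off, the feature (not the labeling) is blocking the prediction.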

View sentiment results

If Sentiment analysis is configured on the Publish page, the test results include the sentiment found in the utterance.

Correct a matched pattern's intent

If you are using Patterns and the utterance matched a pattern but the wrong intent was predicted, select the Edit link by the pattern, then select the correct intent.

Compare with published version

You can test the active version of your app against the published endpoint version. In the Inspect panel, select Compare with published. Any testing against the published model is deducted from your Azure subscription quota balance.
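Testing against the published version can also be done outside the portal by calling the prediction endpoint directly. A minimal sketch of building a request URL in the V3 prediction API shape follows; the endpoint host, app ID, and key are placeholders, and such requests count against your quota just as the portal comparison does.

```python
from urllib.parse import urlencode

def prediction_url(endpoint, app_id, query, slot="production", key="YOUR-KEY"):
    """Build a GET request URL in the LUIS V3 prediction API shape.

    endpoint, app_id, and key are placeholders; "production" and "staging"
    are the two publishing slots.
    """
    params = urlencode({
        "query": query,
        "subscription-key": key,
        "show-all-intents": "true",
    })
    return f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/{slot}/predict?{params}"

url = prediction_url(
    "https://westus.api.cognitive.microsoft.com",
    "00000000-0000-0000-0000-000000000000",
    "Assign Bob Jones to work on the new security feature",
)
print(url)
```

Sending this URL with any HTTP client returns the same JSON the portal shows in its JSON view.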

Compare with published

View endpoint JSON in test panel

You can view the endpoint JSON returned for the comparison by selecting Show JSON view.
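The same JSON can be inspected programmatically. A minimal sketch, assuming the V3 response shape (query, prediction.topIntent, prediction.intents, prediction.entities, and sentiment when it is enabled on the Publish page); the intent and entity names and scores below are illustrative, not real output:

```python
import json

# Abbreviated sample in the V3 response shape (values are illustrative).
response_text = """
{
  "query": "Assign Bob Jones to work on the new security feature",
  "prediction": {
    "topIntent": "AssignTicket",
    "intents": {"AssignTicket": {"score": 0.97}, "None": {"score": 0.02}},
    "entities": {"engineer": ["Bob Jones"]},
    "sentiment": {"label": "neutral", "score": 0.5}
  }
}
"""

result = json.loads(response_text)
prediction = result["prediction"]
top = prediction["topIntent"]
score = prediction["intents"][top]["score"]

print(f"{top} ({score:.2f})")                        # top-scoring intent
print(prediction["entities"])                        # extracted entities
print(prediction.get("sentiment", {}).get("label"))  # only present if sentiment is enabled
```

Using `.get()` for the sentiment field keeps the code working when sentiment analysis is not configured on the Publish page.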

Published JSON response

Additional settings in test panel

LUIS endpoint

If you have several LUIS endpoints, use the Additional Settings link on the Test panel's Published pane to change the endpoint used for testing. If you are not sure which endpoint to use, select the default Starter_Key.

Test panel with the Additional Settings link highlighted

Batch testing

See batch testing concepts to learn how to test a batch of utterances.
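A batch test takes a JSON file of labeled utterances. As a sketch of what such a file looks like, assuming the batch-test field names (text, intent, and entities with inclusive startPos/endPos character indices; the intent and entity names here are the illustrative ones from the example above):

```python
import json

# Build a batch test file: a JSON array of labeled utterances.
# Field names follow the LUIS batch-test format; values are illustrative.
batch = [
    {
        "text": "Assign Bob Jones to work on the new security feature",
        "intent": "AssignTicket",
        # "Bob Jones" spans characters 7 through 15 (inclusive) of the text.
        "entities": [{"entity": "engineer", "startPos": 7, "endPos": 15}],
    },
    {
        "text": "What is the weather like today",
        "intent": "None",
        "entities": [],
    },
]

with open("batch_test.json", "w", encoding="utf-8") as f:
    json.dump(batch, f, indent=2)
```

Keeping the expected intent and entity labels in the file lets the batch test report which utterances the app mispredicts.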

Next steps

If testing indicates that your LUIS app doesn't recognize the correct intents and entities, you can improve your LUIS app's accuracy by labeling more utterances or adding features.