How to recognize intents from speech using the Speech SDK for C#

The Cognitive Services Speech SDK integrates with the Language Understanding service (LUIS) to provide intent recognition. An intent is something the user wants to do: book a flight, check the weather, or make a call. The user can use whatever terms feel natural. Using machine learning, LUIS maps user requests to the intents you've defined.

Note

A LUIS application defines the intents and entities you want to recognize. It's separate from the C# application that uses the Speech service. In this article, "app" means the LUIS app, while "application" means the C# code.

In this guide, you use the Speech SDK to develop a C# console application that derives intents from user utterances through your device's microphone. You'll learn how to:

  • Create a Visual Studio project referencing the Speech SDK NuGet package
  • Create a speech configuration and get an intent recognizer
  • Get the model for your LUIS app and add the intents you need
  • Specify the language for speech recognition
  • Recognize speech from a file
  • Use asynchronous, event-driven continuous recognition

Prerequisites

Be sure you have the following items before you begin this guide:

LUIS and speech

LUIS integrates with the Speech service to recognize intents from speech. You don't need a Speech service subscription, just LUIS.

LUIS uses three kinds of keys:

Key type | Purpose
Authoring | Lets you create and modify LUIS apps programmatically
Starter | Lets you test your LUIS application using text only
Endpoint | Authorizes access to a particular LUIS app

For this guide, you need the endpoint key type. This guide uses the example Home Automation LUIS app, which you can create by following the Use prebuilt Home automation app quickstart. If you've created a LUIS app of your own, you can use it instead.

When you create a LUIS app, LUIS automatically generates a starter key so you can test the app using text queries. This key doesn't enable the Speech service integration and won't work with this guide. Create a LUIS resource in the Azure dashboard and assign it to the LUIS app. You can use the free subscription tier for this guide.

After you create the LUIS resource in the Azure dashboard, log in to the LUIS portal, choose your application on the My Apps page, and then switch to the app's Manage page. Finally, select Keys and Endpoints in the sidebar.

[Screenshot: LUIS portal Keys and Endpoint settings]

On the Keys and Endpoint settings page:

  1. Scroll down to the Resources and Keys section and select Assign resource.

  2. In the Assign a key to your app dialog box, make the following changes:

    • Under Tenant, choose Microsoft.
    • Under Subscription Name, choose the Azure subscription that contains the LUIS resource you want to use.
    • Under Key, choose the LUIS resource that you want to use with the app.

    In a moment, the new subscription appears in the table at the bottom of the page.

  3. Select the icon next to a key to copy it to the clipboard. (You can use either key.)

[Screenshot: LUIS app subscription keys]

Create a speech project in Visual Studio

To create a Visual Studio project for Windows development, you need to create the project, set up Visual Studio for .NET desktop development, install the Speech SDK, and choose the target architecture.

Create the project and add the workload

To start, create the project in Visual Studio, and make sure that Visual Studio is set up for .NET desktop development:

  1. Open Visual Studio 2019.

  2. In the Start window, select Create a new project.

  3. In the Create a new project window, choose Console App (.NET Framework), and then select Next.

  4. In the Configure your new project window, enter helloworld in Project name, choose or create the directory path in Location, and then select Create.

  5. From the Visual Studio menu bar, select Tools > Get Tools and Features, which opens Visual Studio Installer and displays the Modifying dialog box.

  6. Check whether the .NET desktop development workload is available. If the workload isn't installed, select the check box next to it, and then select Modify to start the installation. It may take a few minutes to download and install.

    If the check box next to .NET desktop development is already selected, select Close to exit the dialog box.

    [Screenshot: Enable .NET desktop development]

  7. Close Visual Studio Installer.

Install the Speech SDK

The next step is to install the Speech SDK NuGet package so that you can reference it in the code.

  1. In Solution Explorer, right-click the helloworld project, and then select Manage NuGet Packages to show the NuGet Package Manager.

    [Screenshot: NuGet Package Manager]

  2. In the upper-right corner, find the Package Source drop-down box, and make sure that nuget.org is selected.

  3. In the upper-left corner, select Browse.

  4. In the search box, type Microsoft.CognitiveServices.Speech and press Enter.

  5. From the search results, select the Microsoft.CognitiveServices.Speech package, and then select Install to install the latest stable version.

    [Screenshot: Install the Microsoft.CognitiveServices.Speech NuGet package]

  6. Accept all agreements and licenses to start the installation.

    After the package is installed, a confirmation message appears in the Package Manager Console window.

Choose the target architecture

Now, to build and run the console application, create a platform configuration matching your computer's architecture.

  1. From the menu bar, select Build > Configuration Manager. The Configuration Manager dialog box appears.

    [Screenshot: Configuration Manager dialog box]

  2. In the Active solution platform drop-down box, select New. The New Solution Platform dialog box appears.

  3. In the Type or select the new platform drop-down box:

    • If you're running 64-bit Windows, select x64.
    • If you're running 32-bit Windows, select x86.

  4. Select OK and then Close.

Add the code

Next, add code to the project.

  1. From Solution Explorer, open the file Program.cs.

  2. Replace the block of using statements at the beginning of the file with the following declarations:

    using System;
    using System.Threading.Tasks;
    using Microsoft.CognitiveServices.Speech;
    using Microsoft.CognitiveServices.Speech.Audio;
    using Microsoft.CognitiveServices.Speech.Intent;
    
  3. Replace the provided Main() method with the following asynchronous equivalent:

    public static async Task Main()
    {
        await RecognizeIntentAsync();
        Console.WriteLine("Please press Enter to continue.");
        Console.ReadLine();
    }
    
  4. Create an empty asynchronous method RecognizeIntentAsync(), as shown here:

    static async Task RecognizeIntentAsync()
    {
    }
    
  5. In the body of this new method, add this code:

    // Creates an instance of a speech config with specified subscription key
    // and service region. Note that in contrast to other services supported by
    // the Cognitive Services Speech SDK, the Language Understanding service
    // requires a specific subscription key from https://luis.azure.cn/.
    // The Language Understanding service calls the required key 'endpoint key'.
    // Once you've obtained it, replace below with your own Language Understanding subscription key
    // and service region (e.g., "chinaeast2").
    // The default language is "en-us".
    var config = SpeechConfig.FromSubscription("YourLanguageUnderstandingSubscriptionKey", "YourLanguageUnderstandingServiceRegion");
    
    // Creates an intent recognizer using microphone as audio input.
    using (var recognizer = new IntentRecognizer(config))
    {
        // Creates a Language Understanding model using the app id, and adds specific intents from your model
        var model = LanguageUnderstandingModel.FromAppId("YourLanguageUnderstandingAppId");
        recognizer.AddIntent(model, "YourLanguageUnderstandingIntentName1", "id1");
        recognizer.AddIntent(model, "YourLanguageUnderstandingIntentName2", "id2");
        recognizer.AddIntent(model, "YourLanguageUnderstandingIntentName3", "any-IntentId-here");
    
        // Starts recognizing.
        Console.WriteLine("Say something...");
    
        // Starts intent recognition, and returns after a single utterance is recognized. The end of a
        // single utterance is determined by listening for silence at the end or until a maximum of 15
        // seconds of audio is processed.  The task returns the recognition text as result. 
        // Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
        // shot recognition like command or query. 
        // For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
        var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);
    
        // Checks result.
        if (result.Reason == ResultReason.RecognizedIntent)
        {
            Console.WriteLine($"RECOGNIZED: Text={result.Text}");
            Console.WriteLine($"    Intent Id: {result.IntentId}.");
            Console.WriteLine($"    Language Understanding JSON: {result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult)}.");
        }
        else if (result.Reason == ResultReason.RecognizedSpeech)
        {
            Console.WriteLine($"RECOGNIZED: Text={result.Text}");
            Console.WriteLine($"    Intent not recognized.");
        }
        else if (result.Reason == ResultReason.NoMatch)
        {
            Console.WriteLine($"NOMATCH: Speech could not be recognized.");
        }
        else if (result.Reason == ResultReason.Canceled)
        {
            var cancellation = CancellationDetails.FromResult(result);
            Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");
    
            if (cancellation.Reason == CancellationReason.Error)
            {
                Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                Console.WriteLine($"CANCELED: ErrorDetails={cancellation.ErrorDetails}");
                Console.WriteLine($"CANCELED: Did you update the subscription info?");
            }
        }
    }
    
  6. Replace the placeholders in this method with your LUIS subscription key, region, and app ID, as described in the following table; an illustrative example follows the table.

    Placeholder | Replace with
    YourLanguageUnderstandingSubscriptionKey | Your LUIS endpoint key. Again, you must get this item from your Azure dashboard, not a "starter key." You can find it on your app's Keys and Endpoints page (under Manage) in the LUIS portal.
    YourLanguageUnderstandingServiceRegion | The short identifier for the region your LUIS subscription is in, such as chinaeast2 for China East 2. See Regions.
    YourLanguageUnderstandingAppId | The LUIS app ID. You can find it on your app's Settings page in the LUIS portal.
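For example, after substituting the placeholders, the configuration and model lines might look like the following. The key, region, and app ID shown here are made-up values for illustration only; use the values from your own LUIS resource and app.

var config = SpeechConfig.FromSubscription(
    "0123456789abcdef0123456789abcdef",            // hypothetical LUIS endpoint key
    "chinaeast2");                                 // LUIS resource region

var model = LanguageUnderstandingModel.FromAppId(
    "11111111-2222-3333-4444-555555555555");       // hypothetical LUIS app ID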

With these changes made, you can build (Ctrl+Shift+B) and run (F5) the application. When prompted, try saying "Turn off the lights" into your PC's microphone. The application displays the result in the console window.

The following sections discuss the code.

Create an intent recognizer

First, you need to create a speech configuration from your LUIS endpoint key and region. You can use speech configurations to create recognizers for the various capabilities of the Speech SDK. The speech configuration has multiple ways to specify the subscription you want to use; here, we use FromSubscription, which takes the subscription key and region.

Note

Use the key and region of your LUIS subscription, not those of a Speech service subscription.

Next, create an intent recognizer using new IntentRecognizer(config). Because the configuration already knows which subscription to use, you don't need to specify the subscription key and endpoint again when creating the recognizer.

Import a LUIS model and add intents

Now import the model from the LUIS app using LanguageUnderstandingModel.FromAppId() and add the LUIS intents that you want to recognize via the recognizer's AddIntent() method. These two steps improve the accuracy of speech recognition by indicating words that the user is likely to use in their requests. You don't have to add all the app's intents if you don't need to recognize them all in your application.

To add an intent, you must provide three arguments: the LUIS model (created above and named model), the intent name, and an intent ID. The difference between the ID and the name is as follows.

AddIntent() argument | Purpose
intentName | The name of the intent as defined in the LUIS app. This value must match the LUIS intent name exactly.
intentID | An ID assigned to a recognized intent by the Speech SDK. This value can be whatever you like; it doesn't need to correspond to the intent name as defined in the LUIS app. If multiple intents are handled by the same code, for instance, you can use the same ID for all of them.

The Home Automation LUIS app has two intents: one for turning on a device and another for turning off a device. The lines below add these intents to the recognizer; replace the three AddIntent lines in the RecognizeIntentAsync() method with this code.

recognizer.AddIntent(model, "HomeAutomation.TurnOff", "off");
recognizer.AddIntent(model, "HomeAutomation.TurnOn", "on");

Instead of adding individual intents, you can also use the AddAllIntents method to add all the intents in a model to the recognizer.
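For example, to register every intent defined in the model at once:

// Register all intents from the LUIS app instead of listing them one by one.
recognizer.AddAllIntents(model);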

Start recognition

With the recognizer created and the intents added, recognition can begin. The Speech SDK supports both single-shot and continuous recognition.

Recognition mode | Methods to call | Result
Single-shot | RecognizeOnceAsync() | Returns the recognized intent, if any, after one utterance.
Continuous | StartContinuousRecognitionAsync(), StopContinuousRecognitionAsync() | Recognizes multiple utterances; emits events (for example, Recognizing) when results are available.

The application uses single-shot mode, so it calls RecognizeOnceAsync() to begin recognition. The result is an IntentRecognitionResult object containing information about the recognized intent. You extract the LUIS JSON response by using the following expression:

result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult)

The application doesn't parse the JSON result. It only displays the JSON text in the console window.

[Screenshot: Single LUIS recognition result]
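If your application needs to act on the result rather than just display it, you can parse the JSON yourself. The sketch below is one possible approach; it assumes the response follows the LUIS v2 query format, which includes a topScoringIntent object, and that the System.Text.Json package is available (on .NET Framework it is installed as a separate NuGet package).

// Requires "using System.Text.Json;" in the using block at the top of the file.
// Minimal sketch: extract the top-scoring intent from the LUIS JSON response.
// Assumes a LUIS v2-style payload containing a "topScoringIntent" object.
var json = result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult);
using (var doc = JsonDocument.Parse(json))
{
    if (doc.RootElement.TryGetProperty("topScoringIntent", out var top))
    {
        var intent = top.GetProperty("intent").GetString();
        var score = top.GetProperty("score").GetDouble();
        Console.WriteLine($"    Top-scoring intent: {intent} (score {score:F2})");
    }
}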

Specify recognition language

By default, LUIS recognizes intents in US English (en-us). By assigning a locale code to the SpeechRecognitionLanguage property of the speech configuration, you can recognize intents in other languages. For example, add config.SpeechRecognitionLanguage = "de-de"; to the application before creating the recognizer to recognize intents in German. For more information, see LUIS language support.
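Placed in context, the language setting looks like this; it uses the same subscription placeholders as the earlier code and must be applied before the IntentRecognizer is created:

var config = SpeechConfig.FromSubscription("YourLanguageUnderstandingSubscriptionKey", "YourLanguageUnderstandingServiceRegion");

// Recognize German utterances instead of the default US English.
config.SpeechRecognitionLanguage = "de-de";

using (var recognizer = new IntentRecognizer(config))
{
    // Add intents and recognize as shown earlier.
}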

Continuous recognition from a file

The following code illustrates two additional capabilities of intent recognition using the Speech SDK. The first, mentioned previously, is continuous recognition, where the recognizer emits events when results are available. These events can then be processed by event handlers that you provide. With continuous recognition, you call the recognizer's StartContinuousRecognitionAsync() method to start recognition instead of RecognizeOnceAsync().

The other capability is reading the audio containing the speech to be processed from a WAV file. Implementation involves creating an audio configuration that can be used when creating the intent recognizer. The file must be single-channel (mono) audio with a sampling rate of 16 kHz.

To try out these features, delete or comment out the body of the RecognizeIntentAsync() method and add the following code in its place.

// Creates an instance of a speech config with specified subscription key
// and service region. Note that in contrast to other services supported by
// the Cognitive Services Speech SDK, the Language Understanding service
// requires a specific subscription key from https://luis.azure.cn/.
// The Language Understanding service calls the required key 'endpoint key'.
// Once you've obtained it, replace below with your own Language Understanding subscription key
// and service region (e.g., "chinaeast2").
var config = SpeechConfig.FromSubscription("YourLanguageUnderstandingSubscriptionKey", "YourLanguageUnderstandingServiceRegion");

// Creates an intent recognizer using file as audio input.
// Replace with your own audio file name.
using (var audioInput = AudioConfig.FromWavFileInput("whatstheweatherlike.wav"))
{
    using (var recognizer = new IntentRecognizer(config, audioInput))
    {
        // The TaskCompletionSource to stop recognition.
        var stopRecognition = new TaskCompletionSource<int>();

        // Creates a Language Understanding model using the app id, and adds specific intents from your model
        var model = LanguageUnderstandingModel.FromAppId("YourLanguageUnderstandingAppId");
        recognizer.AddIntent(model, "YourLanguageUnderstandingIntentName1", "id1");
        recognizer.AddIntent(model, "YourLanguageUnderstandingIntentName2", "id2");
        recognizer.AddIntent(model, "YourLanguageUnderstandingIntentName3", "any-IntentId-here");

        // Subscribes to events.
        recognizer.Recognizing += (s, e) => {
            Console.WriteLine($"RECOGNIZING: Text={e.Result.Text}");
        };

        recognizer.Recognized += (s, e) => {
            if (e.Result.Reason == ResultReason.RecognizedIntent)
            {
                Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
                Console.WriteLine($"    Intent Id: {e.Result.IntentId}.");
                Console.WriteLine($"    Language Understanding JSON: {e.Result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult)}.");
            }
            else if (e.Result.Reason == ResultReason.RecognizedSpeech)
            {
                Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
                Console.WriteLine($"    Intent not recognized.");
            }
            else if (e.Result.Reason == ResultReason.NoMatch)
            {
                Console.WriteLine($"NOMATCH: Speech could not be recognized.");
            }
        };

        recognizer.Canceled += (s, e) => {
            Console.WriteLine($"CANCELED: Reason={e.Reason}");

            if (e.Reason == CancellationReason.Error)
            {
                Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
                Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
                Console.WriteLine($"CANCELED: Did you update the subscription info?");
            }

            stopRecognition.TrySetResult(0);
        };

        recognizer.SessionStarted += (s, e) => {
            Console.WriteLine("\n    Session started event.");
        };

        recognizer.SessionStopped += (s, e) => {
            Console.WriteLine("\n    Session stopped event.");
            Console.WriteLine("\nStop recognition.");
            stopRecognition.TrySetResult(0);
        };


        // Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
        Console.WriteLine("Say something...");
        await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);

        // Waits for completion.
        // Use Task.WaitAny to keep the task rooted.
        Task.WaitAny(new[] { stopRecognition.Task });

        // Stops recognition.
        await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
    }
}

Revise the code to include your LUIS endpoint key, region, and app ID, and add the Home Automation intents, as before. Change whatstheweatherlike.wav to the name of your recorded audio file. Then build the application, copy the audio file to the build directory, and run the application.

For example, if you say "Turn off the lights," pause, and then say "Turn on the lights" in your recorded audio file, console output similar to the following may appear:

[Screenshot: LUIS recognition results from an audio file]

Sample source code

The Speech SDK actively maintains a large set of examples in an open-source repository. For the sample source code repository, visit the Microsoft Cognitive Services Speech SDK on GitHub. There are samples for C#, C++, Java, Python, Objective-C, Swift, JavaScript, UWP, Unity, and Xamarin.



samples/csharp/sharedcontent/console 文件夹中可以找到本文中使用的代码。Look for the code from this article in the samples/csharp/sharedcontent/console folder.

Next steps